P5.js is pretty great.
I used it to create art: basically taking animal photos and recreating each photo from that animal's DNA sequence, using the four letters. (I did four passes with different-size letters and layered them in GIMP.) People seem to like them, and they got into an art:science show.
https://p5js.org/
The Coding Train has a lot of videos on using p5.js. Some of them are more sophisticated than the childish iconography suggests. It's pretty fun.
https://thecodingtrain.com/tracks
Years ago, I dabbled in generative art. I even wrote a small Forth-like language to control the generation. It's basically controllable chaos with math, or chaos within bounding patterns. The results were often unexpected. Some examples: https://imgur.com/a/UjWLy7s
You may like https://c50.fingswotidun.com/
It's what I doodle with to generate images using a stack based program per pixel.
Every character is a stack operation, you have 50 characters to make something special.
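The per-pixel stack idea can be sketched roughly like this. This is a hypothetical toy interpreter, not the actual c50 implementation; the operation set here is invented purely for illustration:

```javascript
// Toy per-pixel stack machine (illustrative only, not c50's real op set).
// Each character of the program is one stack operation; whatever is left
// on top of the stack becomes the pixel's brightness.
function evalPixel(program, x, y) {
  const stack = [];
  for (const ch of program) {
    switch (ch) {
      case "x": stack.push(x); break;                          // push pixel x
      case "y": stack.push(y); break;                          // push pixel y
      case "+": stack.push(stack.pop() + stack.pop()); break;  // add
      case "*": stack.push(stack.pop() * stack.pop()); break;  // multiply
      case "^": stack.push(stack.pop() ^ stack.pop()); break;  // XOR: classic texture
      case "s": stack.push(Math.sin(stack.pop())); break;      // sine
      default:
        if (ch >= "0" && ch <= "9") stack.push(Number(ch));    // digit literal
    }
  }
  return stack.pop() ?? 0;
}

// "xy^" is the classic XOR pattern: brightness = x XOR y.
evalPixel("xy^", 5, 3); // 5 ^ 3 = 6
```

Run that over every (x, y) in an image buffer and even three-character programs produce surprisingly rich textures, which is presumably why 50 characters is plenty.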
These are really cool!
One of my hobbies back in college was writing fun JS fiddles [1]. It was super fun to have the time and curiosity to investigate something, and I've been missing it more and more with each passing day. I was super curious about generative art and procedural generation... I guess it's a negative term now, with AIs able to create things as complex as video, audio, and God knows what else. I was once working on a memes app where users could submit images. I was knee-deep in how to identify duplicate images to keep my meme database "clean", so I was investigating cosine similarity... A few months went by, and AI could do that better. That's how I feel about AI now: it can do it better, so why bother?
1 - https://jsfiddle.net/u/victorqribeiro
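For reference, the cosine-similarity part is simple once each image is reduced to a feature vector; the vectors below are illustrative (real duplicate detection would compare perceptual hashes or embeddings):

```javascript
// Cosine similarity between two equal-length feature vectors.
// Returns 1 for identical direction, 0 for orthogonal vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];     // accumulate dot product
    normA += a[i] * a[i];   // squared magnitude of a
    normB += b[i] * b[i];   // squared magnitude of b
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineSimilarity([1, 2, 3], [1, 2, 3]); // ≈ 1 (duplicate)
cosineSimilarity([1, 0], [0, 1]);       // 0 (unrelated)
```

With a threshold like 0.98 you'd flag near-duplicates rather than only exact copies.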
Hmm, isn't that a little like saying "now that we have cameras, no one needs to paint any more!" AIs can generate realistic video and images, but for me the fun of generative art is that it isn't realistic, it has texture, you can get a sense of what kind of patterns it will make, like echoes of the algorithm. Sure, you could probably prompt for some kind of geometric image, but if you asked for a little script that made them, then you could make tweaks and see what happens...
Machines can quickly build repeatable, efficient, 'perfect' clothing - but people still knit.
Art isn’t useful or practical, clothing is.
Fellow generative artist here :waves:
I started out in all the usual ways: inspired by Daniel Shiffman, making generative art first using Processing, then p5.js, and now mostly by writing shaders. Recently, after being laid off from my job, I took my obsession further and released my very first mobile app - https://www.photogenesis.app - as an homage to generative art.
It's an app that applies various generative effects/techniques to your photos, letting you turn your photos into art (not using AI). I'm really proud of it, and if you've been in the generative art space for a while you'll instantly recognise many of the techniques I use (circle packing, line walkers, mosaic grid patterns, marching squares, Voronoi tessellation, etc.), pretty much directly inspired by various Coding Train videos.
I love the generative art space and plan to spend a lot more time making things in this area (as long as I can afford it) :-)
> I now have a small library of simulated materials: watercolor washes, dry brush strokes, felt-tip pens, cracked glaze, pencil fills. None of them are physically accurate. I’m not simulating fluid dynamics or anything like that, I don’t need to. They’re impressions, heuristics that capture enough of the character of a material to be convincing and evoke an emotion.
I find this to be a key insight. I've been working on a black-and-white film app for a while now (it's on my website in profile if you're curious), and in the early stages I spent time poring over academic papers that claim to build an actual physical model of how silver halide emulsions react to light.
I quickly realized this was a dead end because 1) they were horribly inefficient (it's not uncommon for photographers to have 50-100MP photos these days, and I don't want my emulator to take several minutes to preview/export a full image), and 2) the result didn't even look that good, or that close to actual film, in the end (sometimes to the point where I wondered if the authors had actually looked at real film, rather than getting lost in their own physical/mathematical model of how film "should" behave).
Forgetting the physics for a moment, and focusing instead on what things look and feel like, and how that can be closely approximated with a real-time computer graphics approach, yielded far better results.
Of course the physics can sometimes shed some light on why something is missing from your results, and give you vocabulary for the mechanics of it, but that doesn't mean you should try to emulate it accurately.
I read this interview with spktra/Josh Fagin about how he worked on digitally recreating the way light scatters through animation cels, which creates a certain effect that is missing from digital animation - and it was validating to read a similar insight:
"The key isn’t simulating the science perfectly, but training your eye to recognize the character of analog light through film, so you can recreate the feeling of it."
https://animationobsessive.substack.com/p/dangerous-light
Many years ago I went to a Photoshop conference to try and get better. There was a talk about converting color photos to black and white. As a former B&W film photographer, this interested me. Black-and-white film is a little weird (some people put red filters on their lenses to increase contrast).
He showed some techniques. I think someone asked about the best way, but the presenter got a little ranty and basically said that whichever way looks best to your eye is the best way.
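That matches how digital B&W conversion typically works under the hood: a weighted mix of the R/G/B channels, where the weights are ultimately a taste decision. A rough sketch (the standard weights are the Rec. 709 luma coefficients; the "red filter" weights are made up for illustration):

```javascript
// Channel-mixer grayscale conversion. Default weights are the Rec. 709
// luma coefficients; passing custom weights changes the tonal rendering,
// e.g. shifting weight toward red mimics a red lens filter on B&W film
// (reds render lighter, blue skies render darker, contrast increases).
function toGray(r, g, b, weights = [0.2126, 0.7152, 0.0722]) {
  const [wr, wg, wb] = weights;
  return Math.round(wr * r + wg * g + wb * b);
}

toGray(255, 0, 0);                    // 54: pure red is dark in standard luma
toGray(255, 0, 0, [0.8, 0.15, 0.05]); // 204: much lighter with a "red filter" mix
```

The presenter's point, in code terms: there is no canonical `weights`, only the mix that looks right to you.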
These sites are worth mentioning when talking about generative art, IMHO.
https://bauble.studio/
And
https://toodle.studio/
Both were written by the same guy who wrote the Janet for Mortals book about the Janet language, which powers both of those sites.
I've really wanted to see if I could combine those tools to make Arabic-art-inspired generative art. Does anyone know of any projects doing that? There is a lot of crossover between modern generative art and ancient Arabic art.
I used to make generative art around 15 years ago as well; it seems not much has changed in this respect (note that this is not generative AI art). A few years later I remember using Processing.js after reading The Nature of Code by Dan Shiffman, too. Fun times. How time flies.
> An early phyllotaxis spiral, circa 2016.
What a strange claim. How late is too late to be considered early?