AI and the (R)evolution of Creativity

12:05pm in Studio A

Embark on a concise yet comprehensive journey through the evolution of creative artificial intelligence in visual entertainment and filmmaking. This immersive exploration invites creators to delve into the historical progression, present capabilities, and future implications of AI in their craft.

Gain valuable insights into the profound ways this transformative technology has influenced the industry, empowering creatives and pushing the boundaries of AI-assisted, picture-based storytelling. Explore the practical applications of AI tools and techniques, enabling you to make informed decisions and harness the potential of this innovative field.


Download Transcript

J.T. Rooney  00:11

This next presentation I’m personally excited about, because we’ve spent a lot of great time with Pinar and Gary here in this studio, but also it’s been awesome to watch their journey throughout the rest of this community, all over the country and around the world at this point. So, without further ado, I’d like to welcome Pinar to the stage. Thank you so much.

Pinar Demirdag  00:38

Hello, is the mic working? Yes. Okay, perfect. All right. My name is Pinar Seyhan Demirdag. I know some of you, and I look forward to getting to know the rest of you. I am the co-founder and the AI Director of Seyhan Lee, the AI production company that developed Cuebric. How many of you have heard about Cuebric so far? Okay, a healthy 75%. Wonderful. This is your second day of the framework, and I’m pretty sure that your minds are filled with inspiration, but also, sometimes the introduction of new technology — understanding AI, how am I going to integrate it into my workflow — can be challenging. So with my presentation today, I would like to tell you a personal story. I will be telling you the forces that brought me here in front of you. Of course, I will be showcasing a demo of Cuebric and how it works. But I thought it would be very nice, for once, to get to know the person that you’re dealing with. Because we all have success, you know — we make great products or great creativity — but we don’t know the forces that brought a person to where they are. So, with that: Forbes titled their piece on our product “Cuebric: Generative AI Comes to Hollywood.” It’s a beautiful title. And our PR agents wanted me to tell you that this is not a paid placement, because apparently you can buy your way into Forbes, and we were present. We were actually at XR with JT, and Tom Davenport, a Forbes writer who also writes for Harvard Business Review, came, and we were like, what is he going to write? And he titled it this, so it’s very nice. So my story starts in 2010. It’s the story of a creative — me — who found herself at the center of the generative AI revolution’s conception and ended up being a software developer. I know it’s quite weird, but this is how it went. And this was my studio in Paris. I ran this studio together with the other co-founder, my former work partner, Viola.
Our studio was called Pinar&Viola. We did several different amazing visual experiments and projects for different areas of culture. In 2011, we were the first to use face-tracking technology for creative use. What is that? The AR face filters of Snapchat. We did a music video for Diplo — that girl is me. Viola is filming me, and there’s an AR face filter tracking my face with the bunny. And I thought, wouldn’t it be so cool to have all my friends chat with each other while looking like animals? I basically came up with Snapchat — it was 2011. And then I’m like, I’m not an entrepreneur, forget it, you know, let me just make cool music videos. Regret, it’s a bitch, you know. So in 2016, we made the world’s first holographic catwalk for a virtual fashion line. We had our own fashion line with a variety of very vibrant patterns. And in 2017, our collection for IKEA came out globally. To this day, I’m the first and only Turkish designer to have a global IKEA collection in her name. I’m showing you all these different varieties of rich textures — but I’m here to talk about AI and software development, so why am I doing that? Because I realized that everything I had ever done between 2010 and 2020, as an artist who specialized in aesthetic, recursive, rich, abundant, visually inviting textures and patterns, brought me to this day that happened in 2018. The top row is Google’s research into the invention of generative AI. Of course, it didn’t happen like a light switch. It was not like today generative AI didn’t exist and tomorrow it exists. It’s more like a combination of several different researchers building on top of each other’s research. But a very important person played a pivotal role in the birthing of generative AI — back then we used to call it machine learning art, or artificial neural networks. His name is Alexander Mordvintsev. He’s like the whiz kid of the Google Brain lab in Zurich. So this is his research.
Chris Olah, another Google researcher, published research called feature visualization — you can Google it; it’s at distill.pub, Google’s machine learning journal. So they inquired into how artificial neural networks build the features of an object — you know, we have these diffusion models where something comes out of nothing. They sought to understand how, basically, the machine understands perception. And that research ended up looking like my body of work as an artist. The bottom row is my art; as you can see, the one on the left is an IKEA pattern that I showed you earlier, and the others are my fashion patterns. And the top row is Google’s research. So this is the first force-of-nature moment that I experienced in my life, in 2018. I felt like, you know, the only AI I had ever heard of was Stanley Kubrick’s 2001: A Space Odyssey — that AI tried to kill people, you know. And then the next day, Google, the world’s biggest technology company, sends me an email saying, hey, we discovered something, and it looks like what you’ve been doing for the past 10 years; why don’t you come around, and we make some cool shit together. So it was really like — I had a regret with Snapchat, and I’m like, okay, God, I think this is a sign, so maybe I should take it. So what happened in the next year is that together with Google, we developed Infinite Patterns. It’s an app available to everyone online — I’m not sure if it’s still functioning. Together with Google — this is Alexander, the genius who developed it — we made this tool for everyone to make art like mine by using generative AI, simply by importing an image and pushing and pulling some sliders. You could make amazing patterns. And there I really had my illumination moment. I’m like, okay, I used to do these things, these amazing patterns, and it would take me a month, or a week, or several days. Now I’m pulling some sliders to make something similar.
I’m like, what’s going to happen to me? And how about other people who can make stuff like mine in a matter of seconds? So it was really an existential moment. But that’s great, because my light bulb really lit when I could see my mother — who has absolutely no training in art — start making art with the tool I developed with Google. These are the patterns we created in 2019. This period of three months working with this tool really made me understand the mind of the machine: how iterations work, how you can create beauty out of sheer chaos, how parallel processes react to a query. So what I really understood is that these 10 years when I was a creative freely expressing herself were actually the moment when I was creating my dataset and perfecting my own algorithm to understand the future of generative AI and creativity, which is what I am doing today. So I continued my research in generative AI, but this time in filmmaking. I’m not a filmmaker; this is not my background. But I was very fascinated by how we can create motion pictures using artificial neural networks. This is a music video — to my knowledge the first music video made with generative AI, in 2019, for an artist called ASCII. And yeah, I was simply fascinated, but it started to not be enough. I was like, okay, now everybody can make artwork with generative AI, but how about filmmaking? When will we start bringing this emerging technology to filmmaking? And at that moment I decided: okay, you know what, I really want to do this. It’s a decision; I made it with my conviction: I will be bringing this technology to the art of filmmaking. But how am I going to do it? I was living in Europe, and I was an artist with no filmmaking background.
But I have a spiritual teacher — I sometimes talk about her during my lectures — and she always teaches me that the moment you make a decision, not a wish (there’s an ocean of difference between a wish and a decision), when you make a decision — okay, I’m going to do this no matter how wild it is — if you stop doubting yourself, it happens. So things will get very personal in the next slide. Have you ever watched Interstellar? Is there anyone who didn’t watch Interstellar? No, right? Yeah, one person. Oh, my God. You’re in for a treat, my friend. In Interstellar, Christopher Nolan shows us that love is the greatest force in the universe, one that has the capability of bending space and time. And that’s really what happened to me in 2019. While I was on my way to Google I/O in San Francisco to present Infinite Patterns, I had a layover in New York, stayed a few days, and met the most handsome man I have ever seen in my life, who later became my work partner — he’s here: Gary Koepke. And guess what his background is? You have only one guess. Yes. I had no idea — you don’t choose your love with your mind; your heart chooses your love. But I realized that your heart’s capability of finding the right answer is 100 times more potent than your mind’s. I think science still has to wake up to that. So in a span of three months after I made the decision that I would bring generative AI to filmmaking, I found myself in the United States, moving in with Gary, in our beautiful home in Wayland, Massachusetts. Gary did super cool stuff. He’s a director; he directed music videos for David Bowie and U2. He ran his own creative agency of 300 people and directed commercials for Cadillac, (PRODUCT)RED, MTV. And the best part was that it was during COVID, and he was stuck with me, who didn’t stop talking about generative AI. So I was basically bugging him left and right: this is gonna change the world, we have to do something together, you know.
And he quit what he was doing, and I quit what I was doing — my creative studio — and we decided to build an AI production company, Seyhan Lee, that would be the bridge between the cutting-edge science of machine learning — now called generative AI — and the art of filmmaking and creative entertainment. Since 2020, we have been serving the industry with several creative projects. This one that you’re seeing is the first generative AI VFX for a feature film, a European feature called Descending the Mountain. Are you guys familiar with generative AI? Like, do you have experience working with several models? If I were to say generative adversarial networks, GANs, would you understand what I’m saying? Yes. So back then in 2020, as I always say, we were poor: we only had two models, GANs and style transfer. Nowadays kids have picture-to-video, you know. So we had a limited space for being creative with these amazing technologies. And throughout this trailer, as you’re seeing, this is a GAN sequence of how the mountains are moving. The film is about the creative correlations between meditation and psychedelics — a very interesting film. Something else very nice that we did: we created and produced this short film for Beko, called Connections, and we won a shiny big award at D&AD — Design and Art Direction, like the Oscars of advertising — in VFX in generative AI and future realities, in 2021. We did several creative activations. This one is for Outernet, the entertainment district in London, and Humans. This one is also for Outernet, also made with generative AI. We did, of course, several different music videos. But as you can see, we brought generative AI to entertainment — but it’s still not film. Why?
Because just like the early days of animation — it took Ed Catmull about a week to be able to move a hand, you know, let alone make a full animated film like Toy Story — there’s a gap of a few years between the initial exciting new realm and malleable, controllable, high-quality, ending-up-on-the-big-screen creativity. So Gary and I ventured into, again, another wish and decision with our hearts: to make a product, a technology, something where filmmakers could start using this technology, but only at a point where it’s mature, where it can be controlled and deserves the big screen. So there’s this studio called XR — you guys know what it is. Thanks to Connor McGill from Pixera, who introduced us to JT, Francesca, and Scott, we ended up here in this very studio. It was in another building back then — yes, obviously. Details. So when we were visiting XR Studios, we were like, wow, oh my God, virtual production, big screens, this is so cool — but who’s putting our tech into your tech? And they were like, to our knowledge, no one. And in 2022, ChatGPT hadn’t come out yet; generative AI was not world news yet. It was still cutting-edge, fringe, weirdo stuff. So we were like, yeah, let’s have some fun — with absolutely no return-on-investment or how-many-users sales thinking, none of that. We were like, why don’t we develop something together? We will do the development, but you will advise us how to do it right. And with that — sorry, what I wanted to say is that we were thinking: what should we be doing? Should we integrate generative AI, or procedural AI, into Unreal Engine environment building?
Or should we have generative AI as skins for your three-dimensional Unreal environments? But then we started hearing from them, and from other virtual production stage owners we were interviewing, that there’s actually a barrier to entry to virtual production: it’s expensive. Why? Because game engines are beasts to operate. Okay, they’re the best thing since sliced bread, but not everyone is born out of their mother’s womb knowing how to operate these beasts. And we were like, why don’t we find a way to democratize this incredibly powerful technology? And actually, the light bulb lit in Gary’s mind. He was like, why don’t we bring Disney’s multiplane camera to two-and-a-half-dimension creation — but do it by bundling several different AIs together? Generative AI is something different from other AIs: its functionality is to output generated content that didn’t exist before. But before you get there, you have segmentation, for example. Segmentation — being able to push a button and have the image separated into different pieces — is also AI. So that’s how, in the winter of 2022, around December, we built for three months. We found the developer, who happens to be a genuinely genius person with a background in game development, a master’s in AI, and full-stack development. I don’t know, it’s like a unicorn with giraffe skin, you know — a super rarity, as if a unicorn were not rare enough. So we called it Cuebric Bloom, because the UI and everything changed, and then we announced it to the public: hey guys, we developed this, and XR gave us the sandbox for how to do it. And of course, we relentlessly annoyed Aaron, the screens producer of XR Studios, every single day.
We just had a coffee with him, and he recalled that it was so basic, such a proof of concept, that in order for Aaron to test Cuebric on the XR stage, we had to call our developer so he could turn his computer on, because we were burning his own personal GPUs — we didn’t have AWS — so Aaron called him “Andre WS.” His name is Andre. I will of course get to what Cuebric does, but I would love to show you this beautiful video, because ever since December 2022 — like, the day after we announced — we received some 200 emails from virtual production studios, VFX companies, independent artists, and film studios asking: I want a demo, and when, and how soon. We’ve been supported by Fuse and Vū Studios as well; we did several demos on their stages on top of XR’s. So we have this one-minute video explaining what Cuebric does and our aspiration in developing it, by always being independent in spirit and epic in vision. Does this have sound? Okay, let’s start again. Can we put the sound up a bit? Thank you. Perfect, thank you. Alright, let’s talk about Cuebric. So Cuebric, as stated earlier, streamlines the production of two-and-a-half-D environments for virtual production stages, but it is also used by animators and game developers to dimensionalize their two-dimensional images. It has five main functionalities, the first one being image generation. I’m pretty sure every one of you has generated an image before — is there anyone who has never generated an image? Zero, perfect. So you all have experience in prompting. Okay, fantastic. Cuebric is a browser-based tool, so you don’t need a GPU; to prompt faster, you burn our GPUs. It’s at app.cuebric.com. We’re very happy with our UI — incredibly easy and welcoming. Cuebric has four main image generation engines, and they all yield filmic results. The first one is Cuebric — we call it the base model.
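As an aside for the technically curious: one simple way to picture how cinematic "engines" like these can sit on top of a single base model is as style presets prepended to the user's prompt before it reaches the diffusion model. This is a hypothetical sketch — the preset names' style tokens and the whole mechanism are illustrative assumptions, not Cuebric's actual internals.

```python
# Hypothetical sketch of style-preset prompting. The PRESETS text is
# invented for illustration; it is NOT Cuebric's real prompt template.

PRESETS = {
    "classic": "diffuse Hollywood haze, warm romantic colors, rich ambience",
    "moody":   "grounded realism, grainy, low exposure, super 35 film stock",
    "sci-fi":  "Alexa 65 large-format look, clean pristine environment, high contrast",
}

def build_prompt(base: str, preset: str, negative: str = "") -> dict:
    """Combine a user's base prompt with a cinematic style preset."""
    if preset not in PRESETS:
        raise ValueError(f"unknown preset: {preset}")
    return {
        "prompt": f"{base}, {PRESETS[preset]}",
        "negative_prompt": negative,
    }

req = build_prompt("a small pond in a tropical forest, close-up shot",
                   "classic", negative="black and white")
```

The point is only that a short user prompt plus a hidden, well-tuned style suffix can yield a consistent look without the user writing long prompts themselves.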
Cuebric Classic yields Hollywood haze: diffuse light, romantic warm colors, rich ambiences. Cuebric Moody — I love the name — yields moodier results, more grounded realism: grainy, low exposure, Super 35. And Cuebric Sci-Fi is more like Alexa 65, IMAX, large-world shots: clean, pristine environments with more contrasted colors. Hereby you can see the same prompt generated in four different versions with four different cinematic results. These are all custom-trained by us. We have been basing ourselves on Stable Diffusion, but the way we improved the current Stable Diffusion means it’s really a planet away from what it yields on the streets of the internet. I will be getting back to the quality of our image generation by the end of the presentation, but I would like to walk through how you go from concept to camera with the several different AIs. The second is AI inpainting and outpainting. Here’s a demonstration of the good-quality inpainting that we have in Cuebric. Hereby, you’re simply erasing where you would like to place, I don’t know, an olive tree, you go back, and then you can create a cherry blossom tree into existence in your image. And next week we will be welcoming a new workflow — we’re currently in early beta, and we have several different users who are very brave. I’m pretty sure you all have experience working with a beta product: it changes every two weeks, so you like something last week and it’s no longer available, because it’s beta. That’s why we call them brave people. So our paying users have requested that it would be amazing if you didn’t immediately delete what you don’t want to see. Instead, it would be amazing if you were to use our brush — let’s say you want to turn this mechanical monster into a dinosaur — and create a mask on the environment.
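For readers who want the mechanics behind brush-based inpainting, the final composite step is usually simple: only pixels under the painted mask are taken from the newly generated image, and everything else is preserved from the original plate. Here is a minimal numpy sketch of that composite, assuming the generator has already produced a full replacement frame (the diffusion step itself is omitted).

```python
import numpy as np

# Minimal sketch of the masked-inpainting composite step. Assumes the
# generative model has already produced `generated`; only pixels under
# the user's brush mask are replaced, the rest of the plate is kept.

def composite_inpaint(original, generated, mask):
    """original, generated: HxWx3 float arrays; mask: HxW in [0, 1]."""
    m = mask[..., None]                  # broadcast mask over channels
    return m * generated + (1.0 - m) * original

orig = np.zeros((4, 4, 3))               # original plate (all black)
gen = np.ones((4, 4, 3))                 # generated replacement (all white)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                     # the user's brush stroke
out = composite_inpaint(orig, gen, mask)
```

A soft (feathered) mask with values between 0 and 1 blends the edges, which is why shadow and color continuity at the mask border matter so much in the comparisons shown later.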
So then you can control exactly how your mask is by seeing what you’re going to delete behind it. This workflow for inpainting we will be welcoming next week. I don’t like to compare and I don’t like to compete — maybe because I’m a woman, I don’t know — and I think all boats are lifted when rising tides arrive. But if ever you are doing something great and you’re in a, how can I say, David-and-Goliath picture — and in this moment we’re obviously the David, competing with gigantic corporations that offer similar solutions — then: as of this week, we can claim we have inpainting that is really better than anything available on the market, especially when it comes to shadows, color, and perspective. “A small plastic fold pool filled with water”: the next-best comparison is the one on the top, and ours is on the bottom. Here’s the second one, “a wasteland with destroyed buildings.” As you can see, Cuebric also has more semantic success when it comes to understanding the prompt. Next, AI semantic segmentation. Does any of you have experience with semantic segmentation? Let me rephrase: does any of you have experience with automatic segmentation by way of AI models? You, Aaron? Yes, because you use Cuebric. Yay. So this is a new workflow that we will be welcoming. We currently have semantic segmentation, but it works differently. This is super cool: you will be hovering with your mouse over the temple, or the bush, or the rock, and the moment you click, Cuebric will automatically isolate that object from the background. As you can see, you click all three, and you click on the eye icon, and you can see what you just isolated. And the even cooler part is that in the next weeks we will be welcoming an automatic segmentation map.
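The click-to-isolate interaction described above can be pictured in a few lines, under the assumption that a segmentation model has already produced a per-pixel label map: a click just selects every pixel sharing the clicked label and turns the rest transparent. This sketch is an illustration of the general technique, not Cuebric's implementation.

```python
import numpy as np

# Illustrative click-to-isolate sketch. Assumes a semantic segmentation
# model has already labeled every pixel; a click selects all pixels with
# the clicked label and returns an RGBA cut-out layer.

def isolate_clicked_object(image, labels, click_yx):
    """image: HxWx3 uint8; labels: HxW int label map; click_yx: (row, col)."""
    target = labels[click_yx]                     # label under the cursor
    alpha = (labels == target).astype(np.uint8) * 255
    return np.dstack([image, alpha])              # HxWx4 RGBA layer

labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [2, 2, 2]])                    # three toy "objects"
image = np.full((3, 3, 3), 128, dtype=np.uint8)
layer = isolate_clicked_object(image, labels, (1, 2))   # click object 1
```

The "segmentation map" feature mentioned next is then just enumerating the unique labels up front and offering each as a candidate layer instead of waiting for clicks.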
So all you have to do is click on semantic segmentation — you’re seeing the segmentation method on the top right — and the tool will automatically divide the image into several different pieces and suggest which ones you would like to cut. So you’ll say this one, this one, this one, and you will already have your layers ready for you. We offer basic image editing tools like feathering and an eraser, so if ever you’re not satisfied with the corners, you can always go in and correct by hand. We have a second segmentation AI: depth-based segmentation. The moment you choose depth-based segmentation on the right, the tool offers you a depth-range slider at the top right. You can imagine depth-based segmentation like this: 2D images do not have depth — they’re flat images, like their name suggests — whereas depth-based segmentation recognizes the dimension in an image simply by comparing it with other images it has been trained on. So what does that mean? Let’s say you want to isolate the girl in the virtual production stage: you can isolate her with the table, or without the table, or only her face. Imagine an invisible rope going through my head and my body; by pulling and pushing the sliders, you can include the mic, include the computer, or only include my head. So with the slider, for example, this demonstrates that the person doing the segmentation pulled and pushed it in such a way that the mechanical monster was included with the terrain in the front and the terrain at the back. And AI upscaling: obviously, it’s best practice, if you are generating an image, to generate it in HD, because all the text-to-image models are trained on HD-like images. But it’s also perfectly fine to upscale them — supersample them two times, four times, or eight times — coming up to 8K.
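The "invisible rope" slider has a natural reading in code: if a monocular depth model has produced a normalized depth map, the two slider handles define a near/far window, and the layer keeps every pixel whose depth falls inside it. Again a hedged sketch of the general idea — the 0-to-1 depth convention here is an assumption, not a documented Cuebric detail.

```python
import numpy as np

# Sketch of depth-based segmentation. Assumes an estimated depth map
# normalized so 0 = nearest, 1 = farthest; the slider's two handles
# become a [near, far] window selecting a slab of the scene.

def depth_slice(depth, near, far):
    """depth: HxW floats in [0, 1]; returns a boolean layer mask."""
    return (depth >= near) & (depth <= far)

depth = np.array([[0.1, 0.4],
                  [0.6, 0.9]])            # toy 2x2 depth map
mask = depth_slice(depth, 0.3, 0.7)       # keep only the mid-ground
```

Widening the window is "including the table"; narrowing it is "only the face."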
So the image quality you can export your work at in Cuebric is currently 8K. We advise you to work at a maximum of 4K, but you can export in 8K. And let’s do a quick demo. How many of you are VFX artists with experience segmenting an image? One, two, three, four, five? Will you please tell me how many segments this image should be? Yep, great. We thought it would be the sky, the sea, the rocks at the back, the Colosseum, the temple, and the rocks in front. So this is a fast-forward demo of a minute and a half; let’s see how we get there. You have a brush in Cuebric: you first select it, and then you ideate — okay, I would like to segment here, and here, and here. Okay, now you’re dissatisfied, so you go to “segment everything.” Currently, our segmentation creates several different layers in the image. You delete everything other than the sky, because you start with the backplate. Okay, everything is deleted; then, with our inpainting, you make the whole background into one seamless sky. Then you select your sea and delete everything but the sea — and the AI didn’t select that part, so you take your eraser and erase it. Then you make your sea layer. In order to have the sky removed, you use your semantic segmentation, which already recognized the sky, and you delete it. Okay, great. Now you go to the rocks — oh, sorry, the rocks at the back — and you will be using depth. Have your image filled in with more mountains at the back, and of course you will be removing the sky and the sea, because all you care about is the mountains; you already have a sea and a sky layer. If ever the AI didn’t pick up some corners, you can always erase them with your eraser. And now let’s go to the rocks in front: basically rinse and repeat. You do the same process for every layer — the depth-based segmentation chooses the rocks, and you do the same for the temple afterwards.
And what happens after your work is done in Cuebric? You export it. By the way, you don’t only need to generate images in our tool — you can also upload your DP’s plates into Cuebric and do the same segmentation process. This is disguise, as you can all recognize. So what do you do after you are done with your work in Cuebric? You download your PNGs — we will be welcoming TIFF and other formats in the future, but currently we can only export PNGs — and you layer them in your media server or in your favorite game engine. But the cool thing is that we’re working on a feature, which we will be welcoming this month, where Cuebric will be able to export depth information in the form of depth maps. So you no longer need to guesstimate how far away or close your layers should be from one another; the tool will be able to tell you that. And here are the mechanical monsters again, shown in disguise. As you know, the little visualizer shows what the camera sees. The next one is an early test — Connor made this in the early days of Cuebric Bloom. I just wanted to show that Cuebric layers also work in Pixera. And of course, if you’re not a virtual production specialist and you’re an animator or simply a visual effects artist, you can also use Cuebric for your animations or content creation. We regularly test with real-world images and matte paintings. We have a beautiful friendship with Rosco, the industry’s matte painting library, so this image belongs to them. And what we realized is that, for some weird reason that I really don’t know, AI-based segmentation tools — Cuebric, to my knowledge, is the only one, but please prove me wrong; oh, there’s also another online one, but you need your own GPU for that — work better on AI-generated images, which is crazy. So we constantly iterate using real-world images so we can improve our segmentation methods.
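To make the depth-map export concrete: once each exported layer carries depth information, placing the 2.5D planes stops being guesswork — for example, each layer's average normalized depth can be mapped onto a camera-space distance range. The near/far values below are invented engine units for illustration; nothing here is a documented Cuebric export format.

```python
import numpy as np

# Hypothetical sketch of turning exported depth info into layer placement.
# Each layer's mean normalized depth (0 = nearest, 1 = farthest) is mapped
# linearly onto an illustrative [near, far] camera-distance range.

def layer_distance(depth_map, layer_mask, near=100.0, far=5000.0):
    """Mean normalized depth of the masked pixels, mapped to [near, far]."""
    d = depth_map[layer_mask].mean()
    return near + d * (far - near)

depth = np.array([[0.0, 0.0],
                  [1.0, 1.0]])                       # toy depth map
foreground = np.array([[True, True], [False, False]])
background = ~foreground
fg_dist = layer_distance(depth, foreground)          # near plane
bg_dist = layer_distance(depth, background)          # far plane
```

Ordering the layers by these distances in the media server or game engine recreates the multiplane parallax the talk describes.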
Alright, let’s go back to our cinematic image generation quality. What we really love is that — I mean, if you’re a prompt designer, sometimes you need to write a Bukowski poem in order to express yourself. You feel me, right? But in Cuebric, you don’t need a lot of effort to create something really amazing. This cinematic environment was created with only the prompt “a small pond in a tropical forest, close-up shot,” and the only negative prompt you had to write was “black and white.” Same goes for this one: “post-apocalyptic destroyed desert town, wide-angle shot.” That’s all, and you got this image. So when we created Cuebric, we aimed at two things: give it to industry professionals to remove the burden of repetitive, robotic tasks from their workflow, but also aid newcomers — people who do not have experience with these powerful technologies — by giving them a very simple UI and a very easy entry to AI and filmmaking, removing barriers to entry. Let me show off some of our image generation quality. This is the Cuebric base model. It doesn’t really have a house style; we hear from filmmakers that some other image generation tools have their own very distinguished house styles that some may not really like. Here is Cuebric Moody, for those looking for more grounded realism, monochromatic tones, low exposure. Cuebric Classic, with more light and warmth, diffuse light. And Sci-Fi, with more contrast — of course, the subject matter doesn’t need to be sci-fi, but since it generates large-world, epic types of shots, we experimented a lot with sci-fi; here it is used in a city shot. So where we are: we’ve had thousands of filmmakers register their interest in our beta program, and in order not to disappoint anyone, we’re doing a lottery.
So we started two weeks ago: we’re absolutely closing our eyes, moving the mouse through the list, landing on “this one,” and getting in touch with that person, because a lot of people are waiting in line to start using it. We’re very lucky: one of the eight major studios has started using it, it’s starting to be used in more — how do you call it — big events, and a lot of previsualization and VFX companies have purchased Cuebric so far. How many of you have developed a product? It’s my first time. Okay — if you raise two hands, that means you’ve developed more than one. Fantastic. Will you level with me if I were to tell you that getting the pricing right was harder than developing the damn product? Yes, thank you very much. We had a lot of help from Chris Bird, one of the inventors of d3, which turned into disguise; he has a lot of experience in numbers and creative technologies. So currently, Cuebric’s normal pricing varies between $156 a month and $245 a month, depending on your commitment level. But we are offering the adventurous early beta users — we call it a VIP subscription: VIP guests, VIP users — several different discounted tiers between $122 a month and $219 a month, depending on their level of commitment. The reason we structured the price this way is that after consulting several heavy users of generative AI products, we learned that there is this thing called token anxiety — have you guys heard what that is? At the end of the day, Cuebric is a tool where every single button you push burns a GPU. Generative AI is not a cheap hobby. So a lot of the tools out there offer tokens: you can make this much environment, you can use it this many hours. And we realized that people do not really like that, because they’re like, yeah, sorry, I don’t want to do it.
Because I’m going to run out of my tokens. We would like our users to absolutely feel free — so even if you open Cuebric and push a button every single moment, it will still be $122 a month for you. And I will be flashing the next slide for only 30 seconds. I invite you to take your phones out, because I’m pretty sure that you would love this; I will wait for everyone to take their phones out. What I told you earlier is that we are not offering any seats to specific people — we’re doing it on a lottery basis. But since we consider you our VIPs, you’re exactly the type of people we would love to see get their hands on Cuebric. Only for you, for this very moment, this QR code gets you to Cuebric. It’s an offering for you to get your seat, to buy your seat. We have a 30-day cancellation policy, which acts as your 30-day free trial. If ever you are not ready to buy Cuebric and get your hands on it, you can also register your interest on Cuebric.com and wait in the list for the lottery to come. But again, one last time: if you want to get your hands on Cuebric today, this is your moment. This QR code will only remain active for the next four hours; after that, sorry. Okay, well, thank you so much for listening to my talk — for understanding how the creative forces, the force of love, the force of belief and confidence, can get someone from a studio in Europe into the heart of film production in Los Angeles. And I would like to extend my thank-you to a few people in this room: of course JT, and Aaron, and all the XR family. Thank you, disguise and Pixera, for your ongoing friendship. And this is our team. We went from zero to 20 in a span of six months. We are currently raising, so there will be several opportunities to work with us. If ever working with Cuebric and our team is of interest, please come see me or Gary; we would love to speak to you. Thank you very much.

J.T. Rooney  39:11

Thank you so much, Pinar. Really appreciate your time on this. We have time for questions; I'm sure there's plenty. Also, the picture of all of you taking pictures of that was so satisfying. Not to put words in your mouth, but you've spoken so wonderfully on this before, at Volumetric last year and other events. There are a lot of creators and designers and artists who are in this space. Can you speak a little bit about your ethos around using AI for creative work? Because I think it's comforting to people.

Pinar Demirdag  39:37

Absolutely, thank you for this. Those that follow me will know that if you were to ask me why I'm doing this, I'm not doing this to make money. I'm not doing this for any other reason than giving people a chance to empower themselves, to identify what work truly means. I believe that we are conditioned to make money to survive, whatnot. But what we don't realize is that the way we work is filled with several unfun, tedious, repetitive parts in our workflow. So the reason we developed Cuebric, and the reason I'm a very loud advocate of ethical AI and of the place of humanity when working with AI, is because I seek to identify the robotic tasks and eliminate them by using robots, so we can remember who we are: the true creatives and visionaries. And the idea, it's our tagline, is to go from concept to camera in minutes. I know that generative AI still has several flaws, but at the end of the day, in the early days of RED cameras, I believe everyone was tearing their hair out on stage, right? These chunky batteries. It gets very warm, it only lasts an hour. It's freaking annoying. But at the end of the day, it changed the way we produce. So this is where we are with generative AI. I kindly ask those that are interested or intrigued to be independent in spirit, but to always trust their vision when working with AI.

41:27

Hi, I'm Alana. I work at a studio that has an ICVFX stage in New York, and I love this, this is super awesome. A thing that I'm really passionate about is trying to work with indie creators or nonprofits on the stage, and it's hard when, you know, we don't have the budget to create a scene for them. So I'm just curious, in terms of the licensing that you shared, what does that mean for how many scenes you can generate? And what does usage or rights even look like?

Pinar Demirdag  41:54

Unlimited, it's yours. According to the Copyright Office, if any type of production involves human creative input, it's yours. So as I said earlier, you can get Cuebric and push a button nonstop to generate for 30 days in a row, not sleeping. It's all yours.

42:18

Awesome. I already texted my boss saying, hey, we're gonna buy this.

Pinar Demirdag  42:24

Yeah, and since you are professionals, please tell me: we've been interviewing a lot of people, and we've been told that between 50 to 75% of all game-engine world-development stage shoots can get away with a good two-and-a-half-D. Do you think that's correct? Yes, I thought so. It's awesome. One more? You're next? Oh, sorry.

43:11

Hello, hi. I'll go here real quick. This is awesome, as a VFX person who has worked in compositing and sat in scenes and tried to make them work. You've shown this from a virtual production standpoint; is there a plan to have things that work with tools like Nuke, for people who are actually doing compositing work that isn't in an Unreal environment?

Pinar Demirdag  43:33

We would love to speak to you. We have several plans in our roadmap. We would love to tell you what they are, and also have you help us understand how Cuebric can become more compatible with compositing tools like Nuke. We do not have great experience in that; we're currently tackling media servers and game engines, but we'd love to ask you several questions. I will find you.

44:08

Uh, yeah, I just wanted to ask a little bit more about: are you able to export all the layers into Unreal or Blender and work with different camera tracking systems?

Pinar Demirdag  44:15

Yes. In the beginning, when we announced Cuebric, since our developer also has a background in game engines, we put an equal importance on having a seamless integration with Unreal, so that Cuebric could open in Unreal and you could do everything without going to our browser. But what we quickly realized is that Unreal is not a tool that is made to support industrial-level browser-based tools. So then we quickly had to make a decision: either we develop a great product, or we develop an Unreal plugin. It had to be one or the other after we announced, so we had to drop Unreal at the very early stages. But currently, you make PNGs: you can make your stage in Unreal, put in any camera that you want, and hand by hand you can import your layers. Does that satisfy your question? Okay, thank you.

45:15

Hey, Sarah here. This is an incredible professional tool that's going to unlock a lot for people that don't have, maybe, you know, the high-end skill set. But I'm wondering whether or not you see a future where Cuebric becomes an end-user experience that can be pulled on for engaging, interactive, and immersive experiences, so that the end user is the one manipulating and playing with the environment to experience and create something for themselves. Or does it sort of draw a line at the professional AV? And I appreciate that might be a roadmap that you can't talk about.

Pinar Demirdag  45:49

Yeah, so what you just introduced resides in a different dimension than where my head operates. It's an area of exploration that we haven't inquired into yet. You know, what we realize is, it's the first time I am developing a software, and the biggest challenge is to stay focused. You're like a kid in a candy store with shiny objects: I want this, I want that, I want this, I want that. But there's really a discrepancy between what you have in your head and when it manifests into an amazing UI/UX experience. Hitting that is like having a hit song. There are several different ways you can integrate an idea into a great user experience. So, for example, this very week... sorry, this is not available yet. What is available is that when you have this temple, you push a button, it segments everything, and then the temple and every single column is separated, every single rock is separated, every single step is separated. So you get like 40 layers. I mean, it's not ideal. It does the job, but it's not a hit song yet. It took us a while to understand that this is actually better: if you were to have a visual map in order to eliminate this from that, it's actually much nicer for the user. But it took us really two months to figure this out. So we're currently really focusing on only one thing: satisfy virtual production, satisfy visual effects supervisors, satisfy animators. Only after that will we venture into different areas. But thank you for the question. I think your mind works differently than mine, and I would love to have a coffee with you.
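The "segment everything, get 40 layers" step Pinar describes can be pictured with a minimal sketch: each connected region of a segmentation mask becomes its own RGBA layer that a compositor or game engine could import. The function name and the toy mask below are illustrative assumptions, not Cuebric's actual code or API.

```python
import numpy as np
from scipy import ndimage

def mask_to_layers(image: np.ndarray, mask: np.ndarray) -> list[np.ndarray]:
    """Split an RGB image into one RGBA layer per connected mask region."""
    # Label each disconnected blob in the mask (column, rock, step, ...).
    labeled, n_regions = ndimage.label(mask > 0)
    layers = []
    for region_id in range(1, n_regions + 1):
        # Opaque only where this region is; transparent everywhere else.
        alpha = (labeled == region_id).astype(np.uint8) * 255
        layers.append(np.dstack([image, alpha]))  # RGB + per-region alpha
    return layers

# Toy 4x4 image with two separate "objects" in the mask.
img = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 2, 2]], dtype=np.uint8)
layers = mask_to_layers(img, mask)
print(len(layers))  # two disconnected regions -> two layers
```

With many small regions this naturally produces the "40 layers" situation described above; grouping regions via a visual map, as Pinar mentions, would merge labels before the per-region loop.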

47:53

Hey Pinar, I have a question for you regarding seeding. Are these images replicable across different platforms? Are they like a session where you can open and stop and start progress? Or can you repeat the process using a different seed and copy projects across each other?

Pinar Demirdag  48:11

So are you asking... can you, like... what do you mean?

48:15

For sure. So as an example, this scene here: can I copy and paste this scene into a different project, for instance, and have two people working on the same scene?

Pinar Demirdag  48:25

Oh, I see what you mean. I see what you mean. No. When you register for Cuebric, it gives you one seat, and that's your seat. In order to go to market fast, we had to make a lot of, not confessions, what's that English word? Sacrifices. One being that you cannot save your project. I know it sounds ridiculous, but at the end of the day, we realized that all the creators want is to get their hands on the layers: you curate, you get the layers. But then your project is lost when you close Cuebric. It will come next month. So, okay. Oh, yeah. Yeah, you download the layers, of course.

49:04

So it's like a sort of table you're working off of for the image, but really the activity is happening in that session while you're working?

Pinar Demirdag  49:11

Yes. We don't have what Photoshop has with PSD, where you make something and you can come back the next day. We don't have that yet, but we will in the next month.

49:21

Cool. Thanks so much. Thank you.

J.T. Rooney  49:25

Yeah, we have time for probably two more or so.

Patrick Wambold  49:28

Oh, hey, Patrick. Sorry. I was wondering, how fast can it actually work? I mean, if you give it a very simple data set, could you use the segmentation to pull out, like, a green screen key, and do that in real time? Or does it take too long in order to...

Pinar Demirdag  49:43

How long does the segmentation take, you mean? Yeah.

Patrick Wambold  49:46

I mean, could you get it down to, like, 16 milliseconds, so you could do it in real time? Yeah. Okay.

Pinar Demirdag  49:51

We're running on an A10. Okay. Can I say the NVIDIA thing? Yeah, it's public, I can say that. So, yeah, I'm gonna say it. Currently Cuebric runs on A10s. Are you familiar with the A10, the H100, these NVIDIA GPUs? The A10 is like the most basic grade compatible with generative AI. And our generation times, depending on how many people are using it, vary between four to ten seconds for any button that you push in Cuebric; in the maximum worst-case scenario, you wait 14 seconds. But when we're more stable, when we're out of early beta, we will start offering another workflow for more professional people, with a new GPU offering by NVIDIA called Picasso that's made for generative AI tools like ourselves, where image generation is one second, segmentation one second. Awesome. Thank you. Thank you.

J.T. Rooney  50:56

We have time for one more. Yep, your hand was up there.

Pinar Demirdag  50:59

It was you, sir. You've had your hand raised for a long time.

J.T. Rooney  51:03

Run, run, run.

51:08

Do you envision adding rigging to it?

Pinar Demirdag  51:11

Rigging, into this? For that we'd require a 3D visualizer.

51:17

You can do it with two-and-a-half-D. I mean, in After Effects. So a two-and-a-half-D rigging model.

Pinar Demirdag  51:26

We’d love to have a coffee with you too.

51:29

You’re gonna be drinking a lot of coffee.

Pinar Demirdag  51:31

I don't drink coffee; I'll have a decaf. But yeah, I would love to talk about that. We hadn't thought about it, obviously, but we'd love to ask you questions.

J.T. Rooney  51:43

Awesome. Well, that is the time we have. Please find Pinar and Gary; they'll be here. We really appreciate your time. I'm sure there will be lots of good coffee conversations.

SUMMARY KEYWORDS

Cuebric, AI, generative, segmentation, image, tool, creative, day, Unreal, create, VFX, layers, filmmaking, studio, semantic segmentation, called, Gary, develop, virtual, month

SPEAKERS

Pinar Demirdag, J.T. Rooney, Patrick Wambold