Timeline of Generative ai

Tools available for use, Tools demoing, and Tools next up

Happy Tuesday, normies. We’re spending a little more time today walking through generative ai and the companies you can use at each level. We’ll also be touching on Genie, the new platform Google just demoed.

The race to get demos “leaked” shows how much competition there is in this space (Sora from OpenAI, now Genie from Google’s DeepMind). Apparently, it’s a race to remind people you’re working on other stuff, regardless of how much innovative tech you already have.

I think what they’re truly worried about is that if they're not sprinting into the ai space, they’ll be left behind. I’ve really never seen a rush this big in anything in my life.

Everything we’re looking at today is from a consumer lens.

🧾 Prompt to Text 🧾 

Currently Live

The first step in consumption for the masses is prompt to text. You place an input into the prompt box and it returns text as a response. This covers all different types of models (whether smart or not so smart 🤣). There are already a handful of companies that can accomplish this for you.
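If you’re curious what this looks like under the hood (skip ahead if code isn’t your thing), here’s a rough sketch of a prompt-to-text call using OpenAI’s Python library as one example. The model name and prompt below are placeholders I picked for illustration, and other providers have very similar APIs.

```python
# A minimal prompt-to-text sketch using OpenAI's Python SDK as one example provider.
# Assumes `pip install openai` and an OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()

# You place an input into the "prompt box"...
response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "user", "content": "Explain generative ai to a normie in two sentences."}
    ],
)

# ...and it returns a string of text in response.
print(response.choices[0].message.content)
```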

🖼️ Prompt to Image 🖼️ 

Currently Live

The next step in consumption for the masses is prompt to image. You place an input into the prompt box and it returns an image instead of text. This one is a little more finicky, since there’s more detail that needs to be right.

I’ve noticed that a lot of the image generation models aren’t great with the small details, including spelling words correctly and nailing the specifics at the center of the image.

But if you’re looking to generate an image and don’t need something super specific, I’d definitely check these out.
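Same idea here, just with pixels instead of words. Here’s a rough sketch of a prompt-to-image call, again using OpenAI’s image API (DALL·E) as one example; the model, prompt, and size below are placeholders, and other image providers work much the same way.

```python
# A minimal prompt-to-image sketch using OpenAI's image API (DALL·E) as one example.
# Assumes `pip install openai` and an OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # placeholder; use whichever image model you have access to
    prompt="A golden retriever being walked through a park at sunset, photorealistic",
    size="1024x1024",
    n=1,
)

# The response contains a URL to the generated image.
print(result.data[0].url)
```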

📽️ Prompt to Video 📽️ 

Currently Live

The next step in consumption for the masses is prompt to video. You place an input into the prompt box and it returns a video instead of text or an image. There are a few companies that already have this live, but Sora from OpenAI will be rolling out soon (if they can get more computing power).

  • Runway ML - Currently Live

    • I created a video with the prompt “Show me a dog being taken out on a walk” —> Here is that video (speaking of detail: the human’s feet are walking backwards, the dog isn’t actually being walked on a leash, and some weird lighting pops up on the left side of the video… but it’s like 82% of the way there, which is farther than 0, haha).

  • Synthesia - Currently Live

    • Takes a prompt and turns it into a video of a human avatar speaking it back to you.

Other companies are working on the technology but don’t have it live:

  • OpenAI’s Sora - Pending - Demoed but not available to the public

  • Whatever Google hasn’t even demoed yet but is probably working on

🎮️ Prompt to Video… Game? 🎮️ 

Pending - Demo but not available to public

The next step in consumption for the masses would be prompt to video… game? You place an input into the prompt box and it returns a video game. A playable video game.

This is not something I had on my bingo card for what was next in generative ai consumption, but if people are going to use it and it keeps them on longer, then great. It’s all about collecting data: the longer users stay on, the more data these companies can collect on them.

Google just demoed their new platform Genie a couple of days ago.

⏭️ What’s Next 👕 🏠️ 

Blue Sky Dreaming - Probably already in the works

The next steps up in the world of consumption would be two-fold (in my opinion):

  1. Prompt to SaaS

  2. Prompt to Physical Product

Prompt to tool & SaaS is on its way, where you can literally enter a prompt like “build me an agent that will grow my Twitter for me” and it does it. Or “build me an ai that removes files I haven’t used in 30 days on my computer” (if you want that), and it could do that.

Prompt to Physical Product is where things get really wild. This will start with very specific product niches.

My first thought would be shirts (custom shirts). “Create a shirt in 4 different sizes that is black with the wording ‘I’m Awesome!’ in the middle in big block letters. I also want the shirt soft, like Bella Canvas shirts.” Boom.

But as 3D printing gets better, I don’t see why there wouldn’t be a text-to-house prompt: you give it your desired address, budget, location, type of house, and the intricate details, and it spits out the cost of the house, the build time, and other pertinent information. If you like it, you click purchase and buy the house right there online. And then their machines get started.

Honorable Mention (Not good) 🟥 

Notice there’s no Adobe or Apple mentioned in any of this… I really don’t have a good feeling in my stomach about that for two of the biggest tech companies.

It seems like Apple is staying in their lane and really pushing the hardware space, transforming the experience for phones and computers into probably one singular lens (eventually glasses or contacts that move beyond the Vision Pro, except maybe for gaming or high-intensity graphics).

But Adobe is nowhere to be found.

I bet Figma rolls out a prompt-to-UI/UX-design option for designers, along with other ai tools.

Large entertainment companies will be shaken up by this.

Meta’s Llama 2, meanwhile, is only available as a download for you to use.

We will see some household names of tech companies fall. It will be interesting to see which ones.

It’s a crazy world. What we will experience in the next 10-20 years, humanity has never seen. It won’t look anything like what was available in the ’90s, ’00s, and even ’10s. Some of it for the better, and some for the worse. But regardless, innovation is here. Unprecedented.

See ya tomorrow,

Zander

💥 Subscriber Count 👉 2,330 🎉 

We crushed our 2,000 goal over the weekend and we’re almost at the next one! Never would’ve believed we’d hit it that fast… can we hit 2,500 by month’s end - 2 days away!!???! 💥 🎊

Let’s keep the party going —— Do you know any other normal people?!
Share this sign-up link with them today!

⚡️ RESOURCES ⚡️ 

💣️ WANT MORE ai FNP? 💣️ 

Follow me on X ← DO IT ✖️ 

Connect with me on My LinkedIn ← I WANT TO CONNECT 😎 

Follow our Instagram - BOOM!!! 🧨