More AI may be coming to YouTube in a big way

by Andrew Tarantola

YouTube content creators could soon be able to brainstorm video topic, title, and thumbnail ideas with Gemini AI as part of the “brainstorm with Gemini” experiment Google is currently testing, the company announced via its Creator Insider channel.

The feature is initially being released to a small number of selected content creators for feedback, a company spokesperson told TechCrunch, before Google decides whether to roll it out to all users. “We’re collecting feedback at this stage to make sure we’re developing these features thoughtfully and will improve the feature based on feedback,” the video’s host said.

[Image: the “brainstorm with Gemini” screen. YouTube]

The feature will be accessible through the platform’s analytics menu, under the research tab, and will generate idea prompts for a video’s topic, specific talking points, and narrative progression, as well as thumbnail suggestions using the image generation capabilities of Google’s large language model.

 
[Video: “Simplifying Channel Pages Clarification and Inspiration Tab Improvements!” via the Creator Insider channel]

This marks Google’s second foray into incorporating AI assistance into YouTube creators’ creative processes. In May, the company launched a content inspiration tool in YouTube Studio that provides tips and suggestions for future video topics based on viewer trends. For almost any given topic, the AI will highlight related videos you’ve already published, offer tips on themes to use, and generate a script outline for you to follow.

Creators who are participating in the experiment will have access to both the inspiration tool and brainstorm with Gemini, after inputting their video idea in the Studio search bar. Per TechCrunch, the company is using this as an A/B testing method to see whether creators prefer one over the other, or end up using both.

It is unlikely that these two features will fall under the AI transparency guidelines YouTube set out in March, which focus primarily on labeling AI-generated images and videos.

“Generative AI is transforming the ways creators express themselves — from storyboarding ideas to experimenting with tools that enhance the creative process,” YouTube said in a message at the time. “But viewers increasingly want more transparency about whether the content they’re seeing is altered or synthetic.”


Amazon debuts AI ‘Shopping Guides’ for more than 100 product types

Amazon debuted its new AI-powered “Shopping Guide” feature on Wednesday. It will help inform online shoppers about the technical details and brand leaders of more than 100 types of products, from dog food to TVs.

The AI Shopping Guides are arriving Thursday on the U.S.-based iOS and Android apps, as well as the Amazon.com website. They’re designed to reduce the time you spend researching a potential purchase by summarizing key points and important information alongside product listings filtered for your specific needs. “Whether you’re looking for the right camping tent for your first backpacking trip, buying the best shoes for running in the rain, or the perfect new kitchen appliance, you can turn to AI Shopping Guides for help,” the company wrote in a Wednesday blog post.



Google expands its AI search function, incorporates ads into Overviews on mobile

Google announced on Thursday that it is “taking another big leap forward” with an expansive round of AI-empowered updates for Google Search and AI Overviews.

Earlier in the year, Google incorporated generative AI technology into its existing Lens app, which allows users to identify objects within a photograph and search the web for more information on them. Rather than returning a list of potentially relevant websites, the app now serves an AI Overview based on what it sees. At its I/O conference in May, Google promised to expand that capability to video clips.

With Thursday’s update, “you can use Lens to search by taking a video, and asking questions about the moving objects that you see,” Google’s announcement reads. The company suggests that the app could, for example, provide personalized information about specific fish at an aquarium simply by taking a video and asking your question.

Whether this works on more complex subjects, like analyzing your favorite NFL team’s previous play, or on fast-moving objects, like identifying the makes and models of cars in traffic, remains to be seen. If you want to try the feature for yourself, it’s available globally (though only in English) through the Google app for iOS and Android. Navigate to Search Labs and enroll in the “AI Overviews and more” experiment to get access.

You won’t necessarily have to type out your question, either. Lens now supports voice questions, letting you simply speak your query as you take a picture (or capture a video clip) rather than fumbling with your touchscreen in a dimly lit room.
Your Lens-based shopping experience is also being updated. In addition to the links to visually similar products from retailers that Lens already provides, it will begin displaying “dramatically more helpful results,” per the announcement. Those include reviews of the specific product you’re looking at, price comparisons from across the web, and information on where to buy the item. 



Meta and Google made AI news this week. Here were the biggest announcements

From Meta’s AI-empowered AR glasses to its new Natural Voice Interactions feature to Google’s AlphaChip breakthrough and ChromaLock’s chatbot-on-a-graphing calculator mod, this week has been packed with jaw-dropping developments in the AI space. Here are a few of the biggest headlines.

Google taught an AI to design computer chips
Deciding how and where all the bits and bobs go into today’s leading-edge computer chips is a massive undertaking, often requiring agonizingly precise work before fabrication can even begin. Or it was, at least, until Google released its AlphaChip AI this week. Similar to AlphaFold, which generates potential protein structures for drug discovery, AlphaChip uses reinforcement learning to generate new chip designs in a matter of hours rather than months. The company has reportedly been using the AI to design layouts for the past three generations of Google’s Tensor Processing Units (TPUs), and is now sharing the technology with companies like MediaTek, which builds chipsets for mobile phones and other handheld devices.
