In the first of a two-part blog series, we looked at the benefits AI was already bringing to video production. In the second in our blog series, Smart Digital editor Sean Miller looks at some of its limitations and downsides…
It’s likely that companies or individuals will soon be able to generate their own video content using AI – what are the downsides to this?
SM: You can already generate content with AI – you type in your text and the video is generated. But, already, there’s been quite a backlash against it.
I’ve visited shows where AI video was on display, but the novelty has quickly worn off. In marketing terms, generating content solely with AI, instead of creating video that values your product or customer, is a risky move. It can come across as cheap, because people are aware that no human effort has gone into it.
If you want two clips that look consistent and carry a solid narrative, that’s hard to achieve with AI at the moment, though not impossible. There are claims that AI could make entire movies, but we’re not quite there yet.
As always, (processing) time is of the essence…
SM: With complex tasks requiring huge processing power, right now, we don’t know if we will have the capacity to actually use some of AI’s future capabilities.
Processing time is one of the biggest limitations and bottlenecks to AI adoption. Some of its potential applications would overwhelm your computer, as they would cost too much energy. Firstly, you’re limited by what you’re willing to upload to AI servers, and, if those servers are handling lots of requests, then it’s going to be really slow.
For example, you might be on-site at an event and want to use an AI tool to clean up an audio file. You won’t know for certain that it’s going to work, because there’s a queuing system prioritising users which can take hours to get through – and it won’t even tell you your queue position.
This is one of the things you have to be mindful of if you’re going to put AI into your workflow. Yes, you can use it to speed up the editing process, but you can’t rely on it.
AI can add hallucinations and howlers to your audio
SM: At the moment, we’re using AI for simple tasks which save time. Take audio cleanup: traditionally, you’d have your audio file and make adjustments to that same file – adding EQ, for instance, so levels move up and down a bit. AI works differently: it takes that file and creates a completely new, generated version – it’s not the original audio. One problem AI does have is that it can ‘hallucinate.’
For example, during an interview, somebody might be talking in the background. AI knows this is a human voice and thinks it should ‘boost’ it. This causes it to ‘hallucinate’ sounds which don’t exist in normal human language.
AI is not perfect – we’ve seen the mistakes in ChatGPT and Google’s AI, which accidentally advised people that a safe amount of cigarette smoke when pregnant is about one to two boxes a day…
The legalities of AI – a very grey area
SM: Every time you upload a clip to an AI service, it’s using that clip to learn.
If you’re uploading interview clips, the AI will study the person in that interview and use it to teach itself. Legislation hasn’t caught up with the technology but, at the present time, there are no UK or EU laws stating that people have to give permission for AI to use their voice for training.
At present, we’re at the stage where it only needs a couple of minutes of your voice to replicate it perfectly, to the point where we can make people say anything and it sounds exactly like their voice. So if we upload a clip of an interview, a person’s voice can be generated by an AI.
In the future, I expect the EU will introduce some sort of AI-focused, GDPR-style laws. One consequence might be that people suddenly aren’t so keen to give interviews.
On the flipside, they might give permission for an interview that’s just one minute long and let AI generate the rest of the responses, which they can then edit in post-production.