Localization and access service workflows need an oil change.

26th July 2021

by Dom Bourne, Take 1 founder

There’s never been more demand for subtitled and captioned video content. Not only are regulators like OFCOM and the FCC steadily raising the requirements for localized and accessible programming, but the ability to watch video online and on the move has also driven a spike in content consumption, changing viewing habits and bringing new audiences to translated and subtitled material.

The struggle to meet unprecedented demand

But the increased demand has not translated (see what we did there?) into increased budgets, so production companies, language service providers and localization vendors are under pressure to create operational efficiencies to protect their bottom lines. Enter automatic speech recognition, machine translation, subtitle generators, automatic lip synchronization, synthetic voices and the like – all of which have the potential to reduce the human workload, but all of which currently require rigorous quality control checks to bring their output up to broadcast standards.

Hopefully, these AI-enhanced efforts to improve efficiency will pay off in the long term, but in the meantime, relatively straightforward (if expensive) processes have turned into complicated workflows with lots of moving parts. And while we’re all focussed on perfecting the individual systems within these workflows, it’s the data that runs throughout the content supply chain that could already start delivering the savings our sector has been searching for.

Greasing the levers of the content supply chain

Metadata is like the oil running through the engine of access and localization services – transferring vital information from one process to the next and “greasing the levers” of your workflow. Whether that data passes between human transcribers and manually operated subtitling software, or from an AI engine to a human interface for quality control, we all rely on it flowing cleanly from one stage to the next to make our workflows more efficient. And the earlier we can start leveraging the data in our workflows, the more operational efficiency we can achieve.

Traditionally, however, content preparation for original broadcast and versioning are treated as separate workflows, each managed by different stakeholders who are independently responsible for creating whatever materials they need for delivery, compliance and access services. This means that the valuable metadata contained in the Post-Production or As Broadcast Script is not normally made available to localization and access service providers. Part of the reason is that scripts are typically produced as Word documents – a perfectly acceptable format for delivering original program information to the primary network, but one that offers limited re-purposing capability for use in other areas.

If, however, we produce the Post-Production Script as interchangeable data, that original data can inform the production of captions, audio descriptions and subtitles. Dialogue transcription data becomes subtitle text, speaker IDs feed the DFXP files and determine the colors assigned to individual speakers, and timecodes are recycled to create in and out cues, with rules applied from the client’s style guide or the network’s guidelines.

As far as we’re aware, Take 1 is the only transcription and translation service provider currently producing Post-Production and As Broadcast Scripts in this way. Our metadata harvesting platform, Liberty, supports the production of XML-based Post-Production Scripts and TTML timed text for captioning, and the re-purposing of this data into the various documents, files and reports needed throughout the global content production workflow, all within a secure and scalable environment. Our plans to virtualize Liberty in the cloud and open up our API gateway will create more opportunities to integrate with partners’ and customers’ technology stacks and give the entire industry the ability to extract value from the data we’re generating.
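To make that re-purposing step concrete, here’s a minimal sketch of the idea – not Take 1’s Liberty platform or any real delivery spec, just a hypothetical illustration. It takes a tiny, invented XML post-production script (speaker IDs, in/out timecodes and dialogue) and converts it into a bare-bones TTML caption document, applying an assumed style-guide rule that maps each speaker to a caption color.

```python
# Illustrative only: re-purposing post-production script metadata into
# TTML captions. The XML schema, element names and color mapping below
# are hypothetical examples, not Take 1's Liberty format.
import xml.etree.ElementTree as ET

SCRIPT_XML = """<postProductionScript>
  <event speaker="INTERVIEWER" in="00:00:01:00" out="00:00:03:12">
    Tell us how the project started.
  </event>
  <event speaker="GUEST" in="00:00:03:20" out="00:00:07:05">
    It began as a small pilot in 2019.
  </event>
</postProductionScript>"""

# Assumed style-guide rule: each speaker gets a fixed caption color.
SPEAKER_COLORS = {"INTERVIEWER": "white", "GUEST": "yellow"}

def tc_to_clock(tc: str, fps: int = 25) -> str:
    """Convert an HH:MM:SS:FF timecode to TTML clock time (HH:MM:SS.mmm)."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    millis = round(ff * 1000 / fps)
    return f"{hh:02d}:{mm:02d}:{ss:02d}.{millis:03d}"

def script_to_ttml(script_xml: str) -> str:
    """Build a bare-bones TTML document from the script's dialogue events."""
    ttml_ns = "http://www.w3.org/ns/ttml"
    tts_ns = "http://www.w3.org/ns/ttml#styling"
    ET.register_namespace("", ttml_ns)
    ET.register_namespace("tts", tts_ns)

    tt = ET.Element(f"{{{ttml_ns}}}tt")
    body = ET.SubElement(tt, f"{{{ttml_ns}}}body")
    div = ET.SubElement(body, f"{{{ttml_ns}}}div")

    for event in ET.fromstring(script_xml).findall("event"):
        # Timecodes become in/out cues, speaker IDs become colors,
        # dialogue text becomes the subtitle text.
        p = ET.SubElement(div, f"{{{ttml_ns}}}p")
        p.set("begin", tc_to_clock(event.get("in")))
        p.set("end", tc_to_clock(event.get("out")))
        p.set(f"{{{tts_ns}}}color",
              SPEAKER_COLORS.get(event.get("speaker"), "white"))
        p.text = " ".join(event.text.split())

    return ET.tostring(tt, encoding="unicode")

if __name__ == "__main__":
    print(script_to_ttml(SCRIPT_XML))
```

In a real workflow the same source data could just as easily be transformed into SRT files, audio description scripts or compliance reports – the point is that the metadata is captured once, as structured data, and re-used everywhere downstream.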


The spoke(s) in the wheel

The danger of data-driven workflows is that, just as dirty oil can make your car break down, bad data can undermine your whole content supply chain. The quality of your original data determines the quality of your entire workflow, and mistakes multiply when they’re translated into different languages and distributed around the world. That’s why it’s important to either source your transcripts and translations from high-quality service providers or implement stringent quality control processes throughout your workflow – preferably both.

The other factor limiting our ability to create more efficient workflows is the absence of standardization. It’s the equivalent of the gunk that makes your oil dirty and stops your car from running smoothly. Every broadcaster and streaming platform uses a different combination of software and systems, which means that unique workflows and templates have to be created for each client, making it difficult for service providers to streamline via automation. The Interoperable Master Format could solve this problem, at least in terms of content delivery, but the format is yet to be widely adopted.

Time for an oil change?

It’s easy to find fault with a process that involves so much duplicated effort, where recreating data for different processes and repackaging the same material into various formats for different platforms and broadcasters are standard procedures. To be fair, though, with so few businesses servicing the entire content supply chain, not many are in a position to effect change. Take 1 has the unique advantage of being involved from content production right through to global delivery – from creating original transcription data to producing subtitles, captions and audio or video descriptions. This means we’re ideally placed to improve the quality of data throughout the content supply chain and make sure this engine runs at maximum efficiency.

As more services move to the cloud and blockchain technology starts to influence how we distribute and track content, there’s no denying that data is the key to unlocking efficiencies across the industry.  But with the ever-growing demand for new content, in so many languages, on so many different platforms, we can’t afford to wait for future technologies to deliver these time and cost savings.  We’ve already got the data we need to make the process so much better; we just need to start using it.