BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//swoogo.com//NONSGML kigkonsult.se iCalcreator 2.27.21//
CALSCALE:GREGORIAN
BEGIN:VEVENT
UID:3c3934ce709aa806d3e8d8d60bc8afda8d283af4@swoogo.com
DTSTAMP:20240328T160541Z
DESCRIPTION:Over 5% of the world’s population has disabling hearing loss. C
 ontent providers currently meet the needs of this audience by providing on
 -screen translation in one of the 200 different international sign langua
 ges. In some territories\, provision of such translated content is a regul
 atory requirement.\n\nWhen this translated content is created\, sign-langu
 age interpreters are commonly composited directly over a clone of the mai
 n video content\, thereby generating a duplicate version of the programm
 e. But as the industry focuses increasingly on supply chain optimisatio
 n\, could there be a more efficient way of creating and distributing sign
 ed content in a global market?\n\nThis paper presents a solution to thi
 s business problem. It explores how IMF can be enhanced to enable compos
 iting workflows.\n\nThe component-based nature of IMF was designed to br
 ing versioning efficiency to the mastering and distribution process. How
 ever\, as ST 2067 is currently limited to a single video track in any co
 mposition\, visually translated content cannot benefit from the flexibil
 ity and efficiency available within IMF. Instead\, a separate full-lengt
 h video needs to be rendered with the sign-language interpreter in visio
 n.\n\nTo deal with this challenge\, it is proposed that a synchronised a
 uxiliary image track is created\, which can be composited onto the mai
 n image track.\n\nThis work is currently being developed by the DPP in p
 artnership with its member companies\, with a view to submission to SMPT
 E as a plug-in for ST 2067.\n\nThe resulting feature will benefit implem
 enters throughout the IMF lifecycle by enabling storage\, editing\, re-p
 ositioning and other processing of the interpreter video before composit
 ing.\n\nThis paper also considers the challenge of differing pixel raste
 rs between the main and auxiliary video essence. The merits of the plug-
 in are demonstrated with a working prototype of a simple compositing pro
 cess. Opportunities for complex and dynamic compositing via OPL and Meta
 Res are also discussed.\n
DTSTART:20191023T233000Z
DTEND:20191024T000000Z
LAST-MODIFIED:20240328T160541Z
LOCATION:San Francisco Room
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:Auxiliary Video Track in IMF
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR