I attended BBC textAV in London on September 18/19. After a day of presentations from folks who had explored the use of transcripts in video and audio content, we split off into teams to explore some of the ideas further. I worked with a brilliant team focused on segmenting videos.

Adding segmentation to video


TheirStory allows people to record and then archive live video conversations (like Skype) over the web.

We want to let viewers navigate within a film to the stories that interest them.

For example: https://floating-headland-69329.herokuapp.com/stories/597e939513a7070011a273a0

The viewer should be able to skip to the story about what happened in Kentucky without having to skim through the entire video to find the right spot.


Some examples of video segmenting we’ve looked at.




Close up of video player
  • Visual cues in the timeline to indicate important points in the video.
  • Ability to add comments/suggestions/notes etc
  • Con: relying solely on colour indicators is an accessibility problem; colour vision deficiency (CVD) affects approximately 1 in 12 men (8%) and 1 in 200 women worldwide.





TED Talks


Aligning the transcript to the video means you can easily follow the talk or skip to the important parts.






Kentucky Oral History (OHMS)

OHMS makes it easy for individuals to navigate audiovisual media, using four main components:

  • The video player
  • The index (segment list)
  • The transcript
  • Search (within the transcript and the index)

While this helps make AV content much more accessible, the challenge with OHMS is the amount of time it takes to segment the AV material into its constituent chunks and to add metadata to each chunk. So we wanted to combine the benefit OHMS gives the person viewing existing content with the ease of annotating AV content that Edthena offers.


The toggle switch on the right lets the user search either the chapter index or the full transcript, giving them even more control.



The hackathon – kicking off our project

After looking at some examples and putting together ideas on how we’d like this to work, we sketched out the workflow we would need to make chaptering possible. At the content-creator level, this would at least require entering timestamps to be used as chapter markers (although, with more time on a project like this, it would be possible to pick up logical start and stop points from a transcript or with an algorithm).
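As a rough sketch of that creator-side input, the chapter markers could be as simple as a list of timecodes plus titles. The function names, data shape, and example titles here are my illustration, not the hackathon code:

```javascript
// Parse "HH:MM:SS" or "MM:SS" timecodes into seconds.
function toSeconds(timecode) {
  const parts = timecode.split(':').map(Number);
  return parts.reduce((total, part) => total * 60 + part, 0);
}

// A content creator supplies timestamps and titles; each chapter's
// end is the next chapter's start (the last one runs to the end).
function buildChapters(entries, videoDuration) {
  const sorted = entries
    .map(({ time, title }) => ({ start: toSeconds(time), title }))
    .sort((a, b) => a.start - b.start);
  return sorted.map((chapter, i) => ({
    ...chapter,
    end: i + 1 < sorted.length ? sorted[i + 1].start : videoDuration,
  }));
}

const chapters = buildChapters(
  [
    { time: '0:00', title: 'Introduction' },
    { time: '2:30', title: 'Moving to Kentucky' },
    { time: '10:05', title: 'Life after the move' },
  ],
  900 // total video length in seconds
);
console.log(chapters);
```

With start and end boundaries in seconds, the same chapter list can drive both the timeline markers and the synopsis links.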


User navigation for player


Using timecodes for the chapter entry and exit points, on the front end I added chapter markers and colour coding to show how this might work on the video timeline. I also provided key links within the synopsis so a viewer can skip straight to the chapter they are interested in.
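The skip-to-chapter links essentially boil down to setting the player’s currentTime. A minimal sketch, assuming chapters carry start/end times in seconds (the chapter data and helper names are hypothetical; a plain object stands in for the HTML5 video element so this runs anywhere):

```javascript
// Jump the player to a chapter's start. `player` is any object with
// a writable `currentTime`, such as an HTML5 <video> element.
function jumpToChapter(player, chapter) {
  player.currentTime = chapter.start;
}

// Given the current playback time, find which chapter it falls in,
// so the matching timeline segment can be highlighted.
function currentChapter(chapters, time) {
  return chapters.find((c) => time >= c.start && time < c.end) || null;
}

// Hypothetical chapter list with start/end in seconds.
const chapters = [
  { start: 0, end: 150, title: 'Introduction' },
  { start: 150, end: 605, title: 'Moving to Kentucky' },
  { start: 605, end: 900, title: 'Life after the move' },
];

// Stand-in for a <video> element, so the sketch runs outside a browser.
const player = { currentTime: 0 };
jumpToChapter(player, chapters[1]);
console.log(player.currentTime); // 150
console.log(currentChapter(chapters, 200).title);
```

In the browser, `currentChapter` would be called from the video’s `timeupdate` event to keep the highlighted timeline segment in sync with playback.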


Research & documentation in Google Docs: https://docs.google.com/document/d/1eA5cwKZ189Nalhcz0gMUipa_2k0mFr103jVE1JyH_08/edit?ts=5ba25c7c

Repo: https://github.com/edsilv/iiif-video-nav-demo

Demo: https://iiif-video-nav-demo.netlify.com/

UV version of demo: http://universalviewer.io/examples/#?manifest=https%3A%2F%2Fiiif-video-nav-demo.netlify.com%2Fiiif-manifest.json&c=&m=&s=&cv=


  • Philo van Kemenade
  • Zack Ellis
  • Elena Walton
  • Edward Silverton


You can read this post on Medium