Updated: 29th April 2013
As you might imagine, I get lots of emails asking what I use for my screencast recordings. Rather than answer each one individually, I've put this page together to document the process at a fairly high level, so I can just point people here.
Main Production Setup
- 15” Retina MacBook Pro - 2.7 GHz Intel Core i7, 16GB RAM, 500GB SSD
- 2 x 27” Apple Thunderbolt Displays
- Apple Wireless Keyboard
- Wacom Intuos 5 Tablet
- Contour ShuttlePro 2
- Apple USB SuperDrive
- 1 x LaCie 1TB Little Big Drive
- 1 x Western Digital 4TB My Book Thunderbolt
General Purpose Machine (Spare!)
- Apple Mac Pro (2 x 2.8 GHz Quad-Core Xeon, 16GB RAM, 256GB SSD)
- 1 x 20” Apple Cinema Display
- 3 x 750GB Drives
- Sonnet Tempo SATA E4P Card
- External - 2 x Sonnet Fusion D500P with port-multiplier support (multiple 1.5 & 2TB drives configured as multiple RAID 1 arrays)
- DroboPro with 8 x SATA drives (11.8TB)
- Drobo 5 with 5 x 3TB SATA drives and an SSD
Audio Setup
- Audio-Technica AT2005USB Microphone (connected via XLR)
- dbx-266XL - Limiter/Compressor
- Mackie Onyx 1220 Mixer
- Edirol UA-1EX USB Interface
Software
- Capture and Edit - ScreenFlow
- Titling and Effects - ScreenFlow
- Encoding - HandBrake
- Captioning - MovieCaptioner
- Chapters - Metadata Hootenanny
Basic Recording Workflow - Mac Tutorials
I record the tutorial using ScreenFlow on the main production MacBook Pro in multiple segments, usually no more than 5 to 10 minutes long. The introduction is normally scripted, but the bulk of the tutorial is not, relying instead on bullet points created in OmniOutliner to structure the content.
After I've completed recording each segment, I review it and do a basic edit, mainly to get the audio content right. If I completely screw up, I may discard the segment and start again! All captures are done at 1600x900, which leaves room to zoom in to 1280x720 at native resolution.
Basic Recording Workflow - iOS Tutorials
In order to capture the iPad or iPhone screen, I use AirPlay on iOS. On the main production MacBook Pro, I run an application called AirServer. This allows me to display the screen of the iOS device on the Mac, at which point I capture using ScreenFlow. The recording process from this point on is identical to that for recording Mac tutorials.
Post Production - Editing
Once I have created all the segments for the tutorial, I export the audio track from ScreenFlow, convert it to MP3 using Fission, and send it off to CastingWords.com to have the audio transcribed into text.
The next step is to create the final version of the screencast. I use the Retina MacBook Pro for the final edit. This is where I review all the captured segments in sequence, add in any opening graphics, and add in chapter titles and related markers. The major difference with the iOS tutorials is that I have to hand-animate all the taps on the screen. Without a mouse cursor on screen, it's very difficult to follow along, so the tap animations help the viewer see where I'm tapping. It's a considerable overhead, but I think it's invaluable.
The ShuttlePro 2 is a tremendous boon during this process, and I have a number of custom "macros" set up to assist with editing. If a section of the screen needs to be zoomed or highlighted, this is mainly done during the final edit within ScreenFlow.
Post Production - Preparation
Once I've completed the final edit, I export to a QuickTime movie file directly from ScreenFlow using the lossless preset at 100%. As well as the members' version, I duplicate the project file and create a "trailer" for the video to go into the free feed.
Hopefully by this time, I've had the transcribed text back from CastingWords. This is processed and converted manually into an .SRT file using an application called MovieCaptioner, a fairly painstaking process of matching the text with the spoken word.
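For reference, an SRT file is just a series of numbered cues, each with a start and end timecode and the matching text. The lines below are an invented example, not from an actual tutorial:

```
1
00:00:01,000 --> 00:00:04,500
Hi, and welcome to the show.

2
00:00:04,600 --> 00:00:08,200
In this tutorial we'll be taking a look at ScreenFlow.
```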
The chapter titles are exported from the movie file using Metadata Hootenanny and processed for the later stages of encoding.
A copy of the final version of the movie is loaded into Final Cut Pro to export an overscan version of the movie.
Post Production - Encoding
Once I have the two versions of the movie file, the processed chapter file and the subtitle file, I run a bash script that invokes HandBrake. HandBrake encodes the videos into several different formats and adds the chapters. The script also adds the podcast artwork to the file versions using ffmpeg.
The full-quality version of the show can be 20GB or even larger, so HandBrake re-encodes the files to make the file sizes more manageable for distribution:
- 1280x720 HD - Members Version
- 1280x720 HD Overscan - Members Version
- 480x272 iPod/iPhone - Members Version
- 1280x720 HD - Trailer
- 480x272 iPod/iPhone - Trailer
- 1024x768 HD - Members Version
- 480x352 iPod/iPhone - Members Version
- 1024x768 HD - Trailer
- 480x352 iPod/iPhone - Trailer
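The script itself isn't published, but a minimal sketch of the batch encode might look something like this. The file names, flags, and output list are illustrative, not the actual script, and the `run` wrapper makes it a dry run that just prints each command:

```shell
#!/bin/sh
# Dry-run sketch of a HandBrakeCLI/ffmpeg batch encode.
# File names and flags are illustrative, not the actual production script.
run() { echo "$@"; }      # dry run: change to  run() { "$@"; }  to really encode

SRC="master.mov"          # lossless export from ScreenFlow
CHAPTERS="chapters.csv"   # chapter names exported via Metadata Hootenanny
ART="artwork.png"         # podcast artwork

encode() {                # $1=width  $2=height  $3=output file
  # Encode at the requested size and merge in the chapter markers.
  run HandBrakeCLI -i "$SRC" -o "tmp-$3" \
      --width "$1" --height "$2" --markers="$CHAPTERS"
  # Attach the artwork as a cover image without re-encoding the streams.
  run ffmpeg -i "tmp-$3" -i "$ART" -map 0 -map 1 \
      -c copy -disposition:v:1 attached_pic "$3"
}

encode 1280 720 members-hd.m4v
encode 1280 720 trailer-hd.m4v
encode 480  272 members-ipod.m4v
```

Each output in the list above would get its own `encode` line with the appropriate dimensions.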
Once the videos are encoded, I duplicate the main file and use a custom-built application to split it into individual chapter files for use on the main website.
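The splitter application itself is custom, but the same effect can be sketched with ffmpeg stream copies. The chapter times and titles here are made up for illustration, and `run` again makes it a dry run:

```shell
#!/bin/sh
# Dry-run sketch: cut one encoded movie into per-chapter files with ffmpeg.
# Chapter start times, durations and titles are invented examples.
run() { echo "$@"; }   # dry run: change to  run() { "$@"; }  to really cut

split_chapter() {      # $1=start  $2=duration  $3=output file
  # -c copy avoids re-encoding, so each cut is fast and lossless.
  run ffmpeg -ss "$1" -i members-hd.m4v -t "$2" -c copy "$3"
}

split_chapter 00:00:00 00:04:30 chapter-01-introduction.m4v
split_chapter 00:04:30 00:07:15 chapter-02-main-topic.m4v
```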
Post Production - Distribution
The finished video files are then uploaded via FTP (using Transmit) to Libsyn, my hosting provider for media files.
Once the files are uploaded, the information for the tutorial is entered into my custom-built CMS. This is responsible for dynamically creating the show pages on the ScreenCastsOnline website, as well as creating the test and live RSS feeds that distribute the tutorials.
Once the information is entered, I can preview the website pages and download test versions of the videos to check that everything is OK. I can then set the status of the tutorial to published with a date in advance. When the publication day arrives, the website is updated automatically to publish the show pages and update the RSS feeds.
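Under the hood, each episode in the published feed is a standard podcast RSS `<item>` pointing at the hosted media file. The values below are invented for illustration, not an actual feed entry:

```
<item>
  <title>Example Tutorial Title</title>
  <pubDate>Mon, 29 Apr 2013 09:00:00 GMT</pubDate>
  <enclosure url="http://example.com/show-hd.m4v"
             length="123456789" type="video/x-m4v" />
  <guid>http://example.com/show-hd.m4v</guid>
</item>
```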
And that's about it!