Reusing ssh-agent from Git Bash in Visual Studio Code

The Problem

When using Visual Studio Code with a passphrase-protected SSH key (as all SSH keys should be), it got on my nerves that VSCode would ask me for that passphrase every time it tried to connect to a Remote SSH session.

Any time I opened a remote folder or restored a previous set of editing windows connected to remote folders, it would ask once per window for my SSH key passphrase.

This was on Windows 7, where there is no OpenSSH Client package built into the operating system as there is on Windows 10. If you’ve got a corporate laptop, this could be out of your control, as it was in my case. But the same thing also happened to me on my private Windows 10 machine.

After installing Git Bash, you get a MINGW64 copy of ssh-agent that works fine with Visual Studio Code, and you can set up .bashrc to share a single copy of ssh-agent across all instances of Git Bash that you start.

And, you can export the SSH_AGENT_PID and SSH_AUTH_SOCK variables from Git Bash straight into the User Environment variables in your Windows session using the setx command.

The Solution

.bashrc

env=~/.ssh/agent.env

agent_load_env () { test -f "$env" && . "$env" >| /dev/null ; }

agent_start () {
    (umask 077; ssh-agent >| "$env")
    . "$env" >| /dev/null ; }

agent_load_env

# agent_run_state: 0=agent running w/ key; 1=agent w/o key; 2= agent not running
agent_run_state=$(ssh-add -l >| /dev/null 2>&1; echo $?)

if [ ! "$SSH_AUTH_SOCK" ] || [ $agent_run_state = 2 ]; then
    echo "Starting ssh-agent and adding key"
    agent_start
    ssh-add

    echo "Setting Windows SSH user environment variables (pid: $SSH_AGENT_PID, sock: $SSH_AUTH_SOCK)"
    setx SSH_AGENT_PID "$SSH_AGENT_PID"
    setx SSH_AUTH_SOCK "$SSH_AUTH_SOCK"
elif [ "$SSH_AUTH_SOCK" ] && [ $agent_run_state = 1 ]; then
    echo "Reusing ssh-agent and adding key"
    ssh-add
elif [ "$SSH_AUTH_SOCK" ] && [ $agent_run_state = 0 ]; then
    echo "Reusing ssh-agent and reusing key"
    ssh-add -l
fi

unset env

This is a modified version of the GitHub suggestion (“Working with SSH key passphrases” – GitHub Docs).

setx is a Windows command that sets User Environment variables in HKEY_CURRENT_USER, which are then used by all newly-started processes:

“On a local system, variables created or modified by this tool
will be available in future command windows but not in the
current CMD.exe command window.”

Starting Git Bash, you’ll see:

Git Bash window showing output of .bashrc and ssh-add -l

Every Git Bash window you open after that will share the same ssh-agent instance.

Starting Command Prompt, you’ll see:

Windows Command Prompt window with list of environment variables

This shows that the SSH_AGENT_PID and SSH_AUTH_SOCK variables were set.

VSCode

Once you have the .bashrc set up and have opened up at least one Git Bash window, all Remote sessions will reuse the currently-running ssh-agent and you shouldn’t be asked for the key passphrase again.

But you need to solve one final problem:

Problem: VSCode keeps asking me “Enter passphrase for key”.

Solution: You have to use the ssh.exe from the Git Bash installation, e.g. C:\Users\Max\AppData\Local\Programs\Git\usr\bin\ssh.exe:

Visual Studio Code with the Remote.SSH: Path property set to Git Bash’s ssh.exe

The reason is that the Windows built-in OpenSSH comes ahead of Git Bash’s ssh.exe in the PATH order, so it gets executed instead.

Because of all the problems I had using Windows OpenSSH, it may even be worth removing it completely.

You can do this by running Windows PowerShell as Administrator and running:

Remove-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0

and

Remove-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

Shrinkr: Using SCons To Transcode Media

tl;dr

Shrinkr lets you convert a folder’s worth of audio or video files by running a simple command on a build script:

scons -f ShrinkrTranscode -k

In other words, it’s a command-line batch transcoder with rework avoidance.
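The rework avoidance SCons provides boils down to dependency tracking: only rebuild an output when its input has changed. Here is a minimal sketch of that idea in plain Python, assuming a hypothetical `transcode` callable standing in for the real ffmpeg invocation; SCons itself is more robust, since by default it tracks content signatures rather than timestamps.

```python
# Minimal sketch of timestamp-based rework avoidance. The `transcode`
# callable is a stand-in for an ffmpeg invocation; SCons tracks content
# signatures by default, which is stricter than this mtime comparison.
import os

def needs_rebuild(src: str, dst: str) -> bool:
    """True if dst is missing or older than src."""
    return not os.path.exists(dst) or os.path.getmtime(dst) < os.path.getmtime(src)

def transcode_if_stale(src: str, dst: str, transcode) -> bool:
    """Run transcode(src, dst) only when dst is out of date; report if work ran."""
    if needs_rebuild(src, dst):
        transcode(src, dst)
        return True
    return False
```

Running this twice over the same inputs does the work once and skips it the second time, which is exactly what makes re-running the batch job after an interruption cheap.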

By default it’s set up to convert any .mp4 or .mkv files it finds in the current directory and rescale them to Full HD resolution using ffmpeg.

You can edit the ShrinkrTranscode file to change parameters and the selected input files; everything is under your control, since it is essentially a Python script with SCons’ declarative build extensions on top.
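For illustration, a ShrinkrTranscode-style build script might look something like this. This is a hypothetical sketch, not the actual file from the repository; the output naming and the ffmpeg scale arguments are assumptions.

```python
# Hypothetical ShrinkrTranscode-style SCons build script sketch.
# Run with: scons -f ShrinkrTranscode -k
# Not the actual file from the Shrinkr repository; output naming and
# ffmpeg arguments are illustrative.
import os

# Inherit the caller's environment so SCons can find ffmpeg on the PATH.
env = Environment(ENV=os.environ)

# One Command builder per input file: SCons re-runs ffmpeg only when the
# source file (or the command line itself) changes.
for src in Glob('*.mkv') + Glob('*.mp4'):
    name = str(src)
    if '.1080p.' in name:
        continue  # don't re-transcode our own outputs
    target = os.path.splitext(name)[0] + '.1080p.mkv'
    env.Command(target, src,
                'ffmpeg -i $SOURCE -vf scale=-2:1080 -c:a copy $TARGET')
```

`Environment`, `Glob`, and `Command` are standard SCons constructs; `$SOURCE` and `$TARGET` are expanded by SCons per build step, which is what lets one command template cover every file.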

tl;dw

Here’s a 2-minute video summarizing how Shrinkr works.

Setting It Up

You need to have the Python, SCons, and FFmpeg executables installed and discoverable via your system or user PATH environment variable.

Then, simply grab a copy of the ShrinkrTranscode file, put it in the folder with the files you want to transcode, and run it as above.

The Repository

All of the development files are located at https://github.com/nuket/Shrinkr.

The Long Version

A while back, before I was even starting to heavily use Shotcut for non-linear video editing, I wanted a way to automatically generate proxy editing files from original video files.

None of my computers have the hardware decoders necessary for HEVC video and I needed to resample the video down to a more manageable resolution.

So I wrote the original version of Shrinkr as a Python script that could take a JSON configuration file and convert an input video file into any number of output profiles (UtVideo, Huffyuv, and so on). I was basically reinventing the wheel, though.

I didn’t really ever use the original Shrinkr, as it would also require swapping the proxy files in for the originals in the editing software’s project. That would mean parsing an XML project file, figuring out where all of the filenames were located in the object tree, rewriting them, and then writing the whole thing back out.

Also, I had other things I wanted to work on. So I shelved it.

Fast forward several months: As of its 20.06.28 release, Shotcut has an integrated proxy editing workflow, which makes proxy file generation superfluous. This saves a ton of effort on the user’s part.

But what about regular transcoding? What options are available for batch transcoding there?

I had been archiving a number of screencast files recorded using Open Broadcaster Software. These were recorded using ffmpeg’s lossless x264 at whatever high bitrate it needed, but post-processing the files for archival would often reduce the storage required by 66–75%.

So I wrote a new version of Shrinkr that essentially leverages the SCons build system to track which files need processing.

It is basically a build script, configurable however it is needed, using all of the power of the Python language.

This saves a ton of code, and gets right to the point:

Transcoding media files is conceptually identical to compiling software, so using a real build system makes sense.

Hope this helps anyone out there looking for a simple way to get their bulk transcoding done.

Archiving Screencast Videos

Because my computer is a bit old (Ivy Bridge i7-3770), it can only do 1440p screen capture at 60fps when using the -preset ultrafast setting in OBS Studio.

For a processor built in 2012, this is actually pretty good.

When I’m doing a screencast, I want the bulk of the CPU cycles to go towards the program being recorded, not the video encoding. I don’t have a discrete graphics card, and I’m using a small form factor desktop anyway, so my options are limited and price/performance will suck within a 75 watt single-slot PCIe power budget.

Later, I go through and transcode the files into an archival format. The ffmpeg command shown further below tells the encoder (by default, libx264) to use a Constant Rate Factor of 0 (lossless) and -preset veryslow to squeeze as much data out of the file as possible, trading CPU time and computational complexity for storage.

For example, here’s a list of files and sizes associated with an archival codec and an editing codec.

You can see that the archival format can be ~4x–10x smaller than the original files, particularly in cases where there isn’t a lot of motion, or where there are large amounts of low-entropy data (i.e. screencasts where the background is a solid color).

If I later want to use the video as source material in my editor, I transcode it back to a low-res, low-complexity proxy file. I’d want to do that anyway since my computer would otherwise become a hiccuping mess when applying filters to 1440p or 2160p source videos.

Both sets of files above were created using Shrinkr, but in the future I will use an SCons build file to generate the archival files.

The general command is:

ffmpeg -benchmark -i input.mkv -crf 0 -preset veryslow -c:a copy -color_primaries bt709 -color_trc bt709 -colorspace bt709 output.mkv

Funnily enough, even though the encoding is considered lossless, when I run the files through the Netflix VMAF test, it will not identify them as identical. But they should be computationally the same and visually indistinguishable (both using -crf 0), at a fraction of the space.

VMAF score: 97.430436 for two files

The writer of the seminal streaming codec shootout “NVENC comparison to x264 x265 QuickSync VP9 and AV1 (unrealaussies.com)” makes the good point that VMAF is a bit fuzzy and will return less than a 100% match even for identical files, which squares with the fact that a real human probably wouldn’t be able to see a difference either.

I’m still getting up to speed on video editing, but already learning some of the tricks that help make it an enjoyable experience.

Tunisia Travelogue

Six months after traveling there, I finally made the time to do a proper long-form write-up of my trip to Tunisia:

https://vilimpoc.org/travel/tunisia-2019/

I had a chance to expand on the things I noticed there and talk a bit about how to get there and get around, what to see, and most importantly what to eat!

It’s a great place to travel and a fine introduction to North Africa, and I was surprised by how quick the flight there was.