Auto-scrolling is the best scrolling! Click the middle mouse button once and a circle icon is dropped at the current cursor position. Until you click the middle mouse button again, the window scrolls at a speed and direction based on the cursor’s position relative to that circle. Move the mouse down slightly and the window keeps scrolling slowly forever; move it down further and the window scrolls faster. It’s really convenient for scrolling through long documents without having to hold down a button.
Auto-scrolling is great, but there is so much more!
So how do you make this button work properly? On an Alps device just update the following registry keys. You can use my registry file to do this quickly and easily. After updating you’ll just need to log out and back in again.
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Alps\Apoint\Button]
"ButtonFunction3"=dword:00000003
"SPFunction3"=dword:00000003
Lifehacker - The Many Things You Can Do with a Middle Click on Your Mouse
Before we even install WSL I recommend setting up the SSH agent service that ships with Windows’ built-in OpenSSH. It stores your keys securely in the registry using DPAPI (encrypted with your user credentials) so you don’t need to worry about unlocking your agent constantly. As a bonus, it can be shared into WSL so you don’t need to run two agents or put your private keys inside the WSL filesystem.
The first step is to enable and start the service:
Get-Service -Name ssh-agent | Set-Service -StartupType Automatic -PassThru | Start-Service
Presuming you have a keypair already, you can import it with ssh-add just like you would on Linux, or you can generate a new one with ssh-keygen.
# Generate a new keypair if needed (ed25519 is a sensible default)
ssh-keygen.exe -t ed25519
# With no arguments, ssh-add imports the default identity files from ~/.ssh
ssh-add.exe
I won’t walk you through how to install and setup WSL as there are numerous guides out there. The only recommendation I’ll make here is that you make your WSL username the same as your workstation username. This makes a few things easier down the road.
wsl-relay is a great tool maintained by Lex Robinson that really makes WSL a lot more useful. I use it to make my Windows OpenSSH agent available in WSL and also to make my GPG agent from gpg4win available in WSL as well.
We can build it ourselves in Go fairly easily. Here’s the script I use on Ubuntu:
#!/bin/bash
# Install build dependencies: socat and the Go toolchain
sudo apt update && sudo apt install -y socat golang-go
# Grab the wsl-relay repo (GOPATH mode, to match the build step below)
REPO_PATH="github.com/lexicality/wsl-relay"
GO111MODULE="off" go get -d "$REPO_PATH"
# Create the bin directory if we don't have one
if [ ! -d "$HOME/bin" ]; then
    mkdir "$HOME/bin"
fi
# Build a Windows binary for wsl-relay
GOOS=windows GO111MODULE="off" go build -o "$HOME/bin/wsl-relay.exe" "$REPO_PATH"
Now that we have wsl-relay.exe in our local bin directory we can fire it up any time we open a WSL session. I use the following snippet in my ~/.bashrc file to launch the appropriate wsl-relay pipes.
I’m using the presence of “explorer.exe” in my PATH to determine whether or not we are on a WSL session or a normal Linux session because I share the same .bashrc file among all of my home directories. This works really elegantly and has no obvious downsides that I can think of.
~/.bashrc
# Set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ]; then
    PATH="$HOME/bin:$PATH"
fi
# Overrides when we're running in WSL
if type -p explorer.exe >/dev/null; then
    # Set up the gpg-agent relay if we have it available
    if [ -d ~/.gnupg ] && type -p socat >/dev/null && type -p wsl-relay.exe >/dev/null; then
        # Only set it up if it's not already running
        if ! ps aux | grep "[s]ocat.*S.gpg-agent" > /dev/null; then
            [ -e "$HOME/.gnupg/S.gpg-agent" ] && rm "$HOME/.gnupg/S.gpg-agent"
            ( setsid socat UNIX-LISTEN:"$HOME/.gnupg/S.gpg-agent",fork EXEC:'wsl-relay.exe --input-closes --pipe-closes --gpg',nofork & ) >/dev/null 2>&1
        fi
    fi
    # Set up ssh-agent if we have it available, and there isn't a real working SSH_AUTH_SOCK
    if [ -d ~/.ssh ] && type -p socat >/dev/null && type -p wsl-relay.exe >/dev/null && [ ! -S "$SSH_AUTH_SOCK" ]; then
        # Always set the environment variable
        export SSH_AUTH_SOCK="$HOME/.ssh/w32-ssh-agent"
        # Only set it up if it's not already running
        if ! ps aux | grep "[s]ocat.*openssh-ssh-agent" > /dev/null; then
            [ -e "$SSH_AUTH_SOCK" ] && rm "$SSH_AUTH_SOCK"
            ( setsid socat UNIX-LISTEN:"$SSH_AUTH_SOCK",fork EXEC:'wsl-relay.exe --input-closes --pipe-closes --pipe //./pipe/openssh-ssh-agent',nofork & ) >/dev/null 2>&1
        fi
    fi
fi
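Once the snippet is in place, a quick sanity check from a fresh WSL shell is to list the agent’s keys. If the relay is working, this should print the keys held by the Windows OpenSSH agent:
# Lists key fingerprints from the Windows agent via the relayed socket
ssh-add -l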
It’s important to note that the first WSL session I open starts the processes that serve all future sessions until they are stopped manually. I use setsid to run the socat processes in a new session so that they survive even after closing my WSL window.
I use Ansible fairly extensively in my work and do so in WSL often. By default Ansible will not use ansible.cfg if it is in a world-writable directory.
> ansible localhost -m debug
[WARNING]: Ansible is being run in a world writable directory, ignoring it as an ansible.cfg source. For more information see
https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
To work around this issue I detect when I am running in WSL in my ~/.bashrc and explicitly set the ANSIBLE_CONFIG environment variable to the default path. If you set this environment variable then Ansible will happily use the config file in the local directory even though it is world-writable.
~/.bashrc
# Overrides when we're running in WSL
if type -p explorer.exe >/dev/null; then
    # Work around WSL permissions showing world-writable
    export ANSIBLE_CONFIG="./ansible.cfg"
fi
If you’re on an Active Directory domain (which I’m guessing you are if you’re using WSL) you might want to be able to use those credentials inside of WSL. Setting up Kerberos inside of WSL is pretty straightforward and works well.
Once setup, you can use your Active Directory domain credentials in WSL to authenticate to servers that are also joined to the domain. I use this for running Ansible playbooks against Windows hosts with “ansible_winrm_transport: kerberos”.
The first step is obviously installing Kerberos:
sudo apt install krb5-user
From there we’ll want to set up a very basic krb5.conf file that uses DNS to look up all the Kerberos details:
/etc/krb5.conf
[libdefaults]
default_realm = ad.example.net
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
rdns = true
forwardable = yes
At this point you should be able to run “kinit” and receive a Kerberos token from one of your domain controllers.
> kinit
Password for scott@ad.example.net:
> klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: scott@ad.example.net
Valid starting Expires Service principal
05/21/21 13:20:58 05/21/21 23:20:58 krbtgt/ad.example.net@ad.example.net
renew until 05/28/21 13:20:53
You can now use this Kerberos credential to authenticate to network devices via SSH, WinRM, or any other service that can accept Kerberos authentication.
> ssh -o PreferredAuthentications=gssapi-with-mic jump.ad.example.net -v
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic
debug1: Next authentication method: gssapi-with-mic
debug1: Delegating credentials
debug1: Delegating credentials
debug1: Authentication succeeded (gssapi-with-mic).
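As a sketch of the Ansible usage mentioned earlier (the host and inventory names are hypothetical), the ticket from kinit is picked up automatically when using the Kerberos transport:
# Ping a domain-joined Windows host over WinRM with our Kerberos ticket
ansible winserver01.ad.example.net -i inventory.yml -m win_ping \
    -e ansible_connection=winrm -e ansible_winrm_transport=kerberos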
You may also want to throw a gratuitous Kerberos ticket renewal into your .bashrc as well to keep this ticket up to date. Conveniently, this will show a warning if the credential is expired.
~/.bashrc
# Always renew Kerberos creds at login
if [ -f "/tmp/krb5cc_$(id -u)" ]; then
    kinit -R
fi
What happens when we want to introduce changes from our Linux workstations to our Windows workstations? Certainly you could just carefully look at your diff, checkout master, and make the identical changes, but Git offers us some tools for editing history that make this even easier.
Assume we’ve been using our Windows and Linux branches and committing changes. Right now my git history looks something like the following graph:
git log
* c98ec2b - (37 seconds ago) Add PowerShell profile - Scott Evtuch (origin/windows, windows)
| * 05bfbab - (14 minutes ago) Add another alias - Scott Evtuch (origin/linux, linux)
| * 11c0138 - (27 hours ago) Add SSH configuration - Scott Evtuch
| * e2dbae4 - (27 hours ago) Add useful git alias - Scott Evtuch
|/
* 443e495 - (29 hours ago) Add .gitconfig - Scott Evtuch (HEAD -> master, origin/master)
* 7d87207 - (31 hours ago) Add .gitignore - Scott Evtuch
Now that Windows has OpenSSH support, it probably makes sense to add our SSH configuration file into master so it can be shared between our Windows and Linux workstations. Basically what we want to do is move the commit 11c0138 into master, and then rebase our windows and linux branches on top of the new master. To do this, we’re going to use a cherry-pick.
DO NOT perform any of these commands in your real home directory. The working directory changes will almost certainly break something. All of these commands should be run in your separate dedicated “working” copy of the repository.
Before we cherry-pick we need to checkout our master branch where we want the change to go. Once we’re there we run cherry-pick specifying the commit that we want to bring into our branch.
git checkout master
git cherry-pick 11c0138
git log
* b405b79 - (27 hours ago) Add SSH configuration - Scott Evtuch (HEAD -> master)
| * c98ec2b - (13 minutes ago) Add PowerShell profile - Scott Evtuch (origin/windows, windows)
|/
| * 05bfbab - (26 minutes ago) Add another alias - Scott Evtuch (origin/linux, linux)
| * 11c0138 - (27 hours ago) Add SSH configuration - Scott Evtuch
| * e2dbae4 - (27 hours ago) Add useful git alias - Scott Evtuch
|/
* 443e495 - (29 hours ago) Add .gitconfig - Scott Evtuch (origin/master)
* 7d87207 - (31 hours ago) Add .gitignore - Scott Evtuch
Voila! Now we have our commit from the middle of the Linux branch included in our master branch.
The final thing we need to do is rebase our Windows and Linux branches on the master branch so they both properly inherit the new changes. That should be as simple as running these commands:
git rebase master windows
git rebase master linux
Now our history should look like the following graph:
git log
* 7f138b1 - (16 hours ago) Add another alias - Scott Evtuch (HEAD -> linux)
* 554d880 - (2 days ago) Add useful git alias - Scott Evtuch
| * 7b4fa32 - (16 hours ago) Add PowerShell profile - Scott Evtuch (windows)
|/
* b405b79 - (2 days ago) Add SSH configuration - Scott Evtuch (master)
| * c98ec2b - (16 hours ago) Add PowerShell profile - Scott Evtuch (origin/windows)
|/
| * 05bfbab - (16 hours ago) Add another alias - Scott Evtuch (origin/linux)
| * 11c0138 - (2 days ago) Add SSH configuration - Scott Evtuch
| * e2dbae4 - (2 days ago) Add useful git alias - Scott Evtuch
|/
* 443e495 - (2 days ago) Add .gitconfig - Scott Evtuch (origin/master)
* 7d87207 - (2 days ago) Add .gitignore - Scott Evtuch
Notice that all of the branches from origin are still behind and inheriting from the old history. We can force push them up and then pull them down to our other machines easily now. The final history looks like this:
git push --all --force-with-lease
git log
* 7f138b1 - (16 hours ago) Add another alias - Scott Evtuch (HEAD -> linux, origin/linux)
* 554d880 - (2 days ago) Add useful git alias - Scott Evtuch
| * 7b4fa32 - (16 hours ago) Add PowerShell profile - Scott Evtuch (origin/windows, windows)
|/
* b405b79 - (2 days ago) Add SSH configuration - Scott Evtuch (origin/master, master)
* 443e495 - (2 days ago) Add .gitconfig - Scott Evtuch
* 7d87207 - (2 days ago) Add .gitignore - Scott Evtuch
The greatest benefit of tracking your various home directories in Git is the ability to control the inheritance of information. Rebase is incredibly powerful here and lets you introduce global changes across all of your workstations very easily.
You can continue the branching concept to add multiple layers. The “linux” branch becomes “all of my normal configurations plus any Linux-specific configuration”. The “linux-work” branch becomes “all of my normal Linux configurations, plus any work-specific configuration”.
Once you start tracking things this way it becomes really easy to fix errors or make tweaks upstream. Want to tweak an alias you’ve been using for years that is on all of your workstations? Just update master, rebase, and pull on all of your workstations.
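As a rough sketch, that whole cycle (using the branch names from the examples above) looks something like this:
# In the dedicated working copy: fix the alias on master
git checkout master
# ...edit the alias in .gitconfig...
git commit -am "Tweak git alias"
# Rebase the OS branches onto the new master and force-push everything
git rebase master linux
git rebase master windows
git push --all --force-with-lease
# On each workstation (commit any local changes first, since this is destructive)
git fetch
git reset --hard origin/linux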
It’s very possible we have some configurations that we like in our current home directory that we haven’t introduced to the repo yet. In that case we don’t want to just destroy those changes by doing a hard git reset like we did in Part 1.
Make sure you have a backup of your current home directory and test this process somewhere less important first. You can definitely wreck your home directory if you screw up past this point.
Just like last time we’re going to initialize an empty repository in our home directory, set up our origin, and fetch.
cd ~
git init
git remote add origin git@github.com:example/home-dir.git
git fetch
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 7 (delta 0), reused 7 (delta 0), pack-reused 0
Unpacking objects: 100% (7/7), 1.28 KiB | 41.00 KiB/s, done.
From github.com:example/home-dir
* [new branch] linux -> origin/linux
* [new branch] master -> origin/master
* [new branch] windows -> origin/windows
Next we’ll need to create a local branch that tracks our remote. We don’t want to do a checkout on this branch because that would destroy anything unique in our current home directory.
git branch linux origin/linux
Branch 'linux' set up to track remote branch 'linux' from 'origin'.
At this point we’re going to use some low-level git commands to effectively “checkout” this branch without actually modifying any files in our working directory. The end result should be that we have our local “linux” branch checked out and all of our local differences appear as unstaged changes.
This is very similar to what we did in Part 1 except instead of doing a soft git reset we are actually forcing HEAD to point to our linux branch so we can add commits as if we had checked it out normally.
git symbolic-ref HEAD refs/heads/linux
git reset HEAD .
Now that we’re on the branch where we want our changes, we can run a diff and look for things to commit.
git diff
diff --git a/.gitconfig b/.gitconfig
index 6249887..d79918e 100644
--- a/.gitconfig
+++ b/.gitconfig
@@ -1,3 +1,5 @@
[user]
name = Scott Evtuch
email = scott@example.net
+[alias]
+ com = checkout master
diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index 461f989..0000000
--- a/.gitignore
+++ /dev/null
@@ -1,3 +0,0 @@
-*
-!/.gitconfig
-!/.gitignore
We’re missing our .gitignore file, which is expected. We can ignore that for now since it will be fixed when we hard reset.
Also we have an alias in our .gitconfig file that isn’t present in the repo’s version. Let’s stage the file and commit it.
git add .gitconfig
git commit -m "Add useful git alias"
Once we’re satisfied that we have committed all of the changes to existing files that we need, do a hard git reset to make sure we’re fully up to date.
git reset --hard
HEAD is now at e2dbae4 Add useful git alias
We might also have some new files we want to add to the repo that only exist on this machine right now. To add those we’ll need to modify .gitignore to include them.
For this example I’ll be adding my .ssh configuration file. Keep in mind that directories need to be whitelisted as well as files, or Git will not traverse into those directories.
.gitignore
*
!/.ssh
!/.ssh/config
!/.gitconfig
!/.gitignore
Git status should now show a modified .gitignore to commit and our .ssh directory as untracked.
git status
On branch linux
Your branch is up to date with 'origin/linux'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: .gitignore
Untracked files:
(use "git add <file>..." to include in what will be committed)
.ssh/
no changes added to commit (use "git add" and/or "git commit -a")
Since we’ve been really specific in our .gitignore file we can safely stage “all” files and do our commit.
git add .
git commit -m "Add SSH configuration"
Now you can push your changes up to remote and pull them down on other machines as well.
git push
At this point in my example we now have two workstations that have the “linux” branch checked out in their respective home directories. Changes from either can be committed and push/pulled to keep things in sync. If we wanted to we could also split off this branch into a “linux-work” branch for configurations that are specific to a corporate-issued workstation, for example.
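Creating that split is just another branch (the name here is only an example):
# Branch work-specific configuration off of the linux branch
git branch linux-work linux
git push -u origin linux-work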
In Part 3 I explore how we deal with changes that happen in the various branches we’ve set up, and how we can move them up into master.
I know that several people will tell me that doing this is a horrible idea, and they are probably right. Let’s get some obvious ones out of the way:
Why this is a horrible idea:
Now that we’ve established why this will probably come back to bite me, let’s dive in to how to do it properly!
While it is tempting just to stroll into my home directory and run “git init”, I am choosing to create the empty repository in a new clean directory (coincidentally also located in a path beneath my home directory).
Right now this is mostly to avoid accidentally trashing my real home directory until we have things ready, but the separate copy of the repository will come in handy later for some other tasks.
git init ./repos/home-dir
In this new directory we’ll want to start by placing a .gitignore file that ignores everything except itself for now. The asterisk at the top tells Git to ignore all files, and the lines below it starting with an exclamation point explicitly tell Git which files we want to track.
.gitignore
*
!/.gitignore
This file is going to be crucial to preventing a lot of bad outcomes. We don’t want to trust ourselves to only commit files that we intend to, so the .gitignore file serves as an exhaustive list of all the files we intend to make a part of our repo. Let’s commit this now.
git add .
git commit -m "Add .gitignore"
Now we can start adding files that we will treat as global across all of our workstations. In my case I want my .gitconfig file to be on all of my workstations. We’ll also add this file to our .gitignore, negated with a ! at the beginning of the line.
.gitconfig
[user]
name = Scott Evtuch
email = scott@example.net
.gitignore
*
!/.gitconfig
!/.gitignore
We’ll add these changes as a commit to our master branch.
git add .
git commit -m "Add .gitconfig"
For my use case I’m going to keep separate branches for Windows and Linux configuration that is specific to each operating system. They will both branch off of the master branch which contains files that are the same for both operating systems. In the future this will allow us to introduce global changes and have them appear in both branches via rebase.
Let’s create those two branches off of master now.
git branch windows
git branch linux
I’m also going to set up a new remote on a private GitHub repository for easy access. Depending on your security requirements, you may want to store your remote somewhere else.
git remote add origin git@github.com:example/home-dir.git
git push -u origin --all
Alright now that we have our basic repository set up, we’ll want to set up an existing home directory to track this repo.
Make sure you have a backup of your current home directory and test this process somewhere less important first. You can definitely wreck your home directory if you screw up past this point.
Git doesn’t like cloning into a directory that already has files so let’s just initialize an empty repository in our home directory, set up our origin, and fetch.
cd ~
git init
git remote add origin git@github.com:example/home-dir.git
git fetch
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 7 (delta 0), reused 7 (delta 0), pack-reused 0
Unpacking objects: 100% (7/7), 1.28 KiB | 41.00 KiB/s, done.
From github.com:example/home-dir
* [new branch] linux -> origin/linux
* [new branch] master -> origin/master
* [new branch] windows -> origin/windows
I’m assuming during initial setup you only copied files from your real home directory, so there shouldn’t be any changes introduced by checking out our branch. However, to do things as safely as possible we’ll still want to inspect the diff between our current home directory and what the repo will set things to.
To view the diff we’re going to soft reset to the remote branch we want to be on, unstage all of our local changes, and then show a git diff. The soft git reset won’t allow us to add any changes since our HEAD is still pointing to an empty master branch, but we’ll learn how to work around that in Part 2.
git reset --soft origin/linux
git reset HEAD .
git diff
diff --git a/.gitconfig b/.gitconfig
index 6249887..eda2d3c 100644
--- a/.gitconfig
+++ b/.gitconfig
@@ -1,3 +1,3 @@
[user]
- name = Scott Evtuch
+ name = scott evtuch
email = scott@example.net
diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index 461f989..0000000
--- a/.gitignore
+++ /dev/null
@@ -1,3 +0,0 @@
-*
-!/.gitconfig
-!/.gitignore
Looking at the diff we can see that the only changes between our current home directory and the one provided by the repo are the deleted .gitignore file and the capitalization of our name in .gitconfig.
The .gitignore file is obviously expected and we probably don’t care about the name capitalization so let’s do a git checkout for real. This action will permanently destroy any changes that you saw in the diff.
git reset --hard origin/linux
HEAD is now at 443e495 Add .gitconfig
git checkout linux
Switched to a new branch 'linux'
M .gitconfig
D .gitignore
Branch 'linux' set up to track remote branch 'linux' from 'origin'.
Congratulations! We’re now tracking the remote branch. This means we can commit new updates to files and push them up to our remote. In the future if there are changes from other machines we can pull them down and introduce them locally too.
In my example scenario this is where I would start adding the files that are specific to Linux that I don’t want in master. My master branch is the parent for both Linux and Windows so it should only contain files that belong in both.
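For example, adding a Linux-only file might look like this (the file name is hypothetical, and remember to whitelist it in .gitignore first):
git add .gitignore .bash_aliases
git commit -m "Add bash aliases"
git push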
Check out how to safely clone the repo to a second machine in Part 2.
If you just follow the instructions in Microsoft’s documentation you are probably going to run into weird issues down the road. After many failed attempts over the years I have finally found a process that lets you properly join a domain without ever having network connectivity to a writable domain controller. Here it is!
Updated 2022-01-18: Changes for Server 2019 compatibility. Thanks to Markus Hampe for pointing out the issue and helping troubleshoot.
The first step is pretty obvious. We’re going to have to create a computer account in our domain with a known password that we can provide to the computer to join with.
$ComputerPassword = 'Password!'
New-ADComputer -Name 'NewComputer' -AccountPassword ($ComputerPassword | ConvertTo-SecureString -AsPlainText -Force)
Once that’s done we’ll want to add it to the security group that allows its account to be replicated to our Read-Only Domain Controllers. Best practice is to create a security group for each physical site or RODC and add those, but for this example I’m just going to use the default group.
$ComputerName = 'NewComputer'
Get-ADGroup "Allowed RODC Password Replication Group" | Add-ADGroupMember -Members (Get-ADComputer $ComputerName)
This next part is fairly critical and not super obvious. If your computer only has access to a Read-Only Domain Controller it won’t be able to update its Service Principal Names after joining. Without these in place Kerberos is going to fail and you’re going to run into a lot of weird issues. To fix these problems you can prepopulate the required SPN entries manually.
$ComputerName = 'NewComputer'
$Domain = 'example.net'
$SPN = @{
Replace = @(
"TERMSRV/$ComputerName.$Domain",
"WSMAN/$ComputerName.$Domain",
"RestrictedKrbHost/$ComputerName.$Domain",
"HOST/$ComputerName.$Domain",
"TERMSRV/$ComputerName",
"WSMAN/$ComputerName",
"RestrictedKrbHost/$ComputerName",
"HOST/$ComputerName"
)
}
Get-ADComputer $ComputerName | Set-ADComputer -ServicePrincipalNames $SPN
Likewise, your computer is probably not going to be able to register its own DNS records without access to a writable Domain Controller, so we’ll create that record manually.
$ComputerName = 'NewComputer'
$Domain = 'example.net'
$ComputerIP = '192.168.100.200'
Add-DnsServerResourceRecord -ZoneName $Domain -A -CreatePtr -IPv4Address $ComputerIP -Name $ComputerName
The final step is to actually run the join on the computer. In my experience, joining to an RODC only works if the computer already has the appropriate name before starting the join. Before we do anything we’ll need to rename the computer and restart.
Rename-Computer -NewName 'NewComputer' -Restart
Once that’s done run the PowerShell commands to join to the specific RODC using a pre-populated password.
$Domain = 'example.net'
$Rodc = 'rodc01.example.net'
$ComputerPassword = 'Password!'
$ComputerCredentials = New-Object pscredential -ArgumentList ([PSCustomObject]@{
UserName = $null
Password = ($ComputerPassword | ConvertTo-SecureString -AsPlainText -Force)[0]
})
$Options = 'UnsecuredJoin,PasswordPass,JoinReadOnly'
Add-Computer -Domain $Domain -Options $Options -Credential $ComputerCredentials -Server $Rodc -Restart
Voila! If everything went well you should now be able to log in to this computer using domain credentials and access domain resources. In my experience it may take one or two reboots before Group Policies start applying properly.
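If you want to sanity-check the join, a couple of standard AD troubleshooting commands on the new computer will confirm that Kerberos tickets are being issued and that the machine’s secure channel to the domain is healthy:
klist
nltest /sc_query:example.net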
Assuming you have followed the instructions on Jekyll’s website and have a functional installation, we’ll want to start by creating a new Jekyll site:
jekyll new mysitename
Running bundle install in mysitename...
New jekyll site installed in mysitename.
We’ll need the site to be a git repository so we can push to GCP in order to kick off our CI/CD automation. I’ll assume from here on out that you have installed Git and are already familiar with its basic usage.
cd mysitename/
git init
Initialized empty Git repository in mysitename/.git/
Adjust the _config.yml to suit your needs by updating the site name, URL, and any other options. Once you’re satisfied let’s do a git commit. This will give us a nice snapshot to revert to before messing with anything complicated like themes or plugins.
git commit -m "Initial jekyll setup"
[master (root-commit) f6b9286] Basic setup
7 files changed, 215 insertions(+)
create mode 100644 .gitignore
create mode 100644 404.html
create mode 100644 Gemfile
create mode 100644 Gemfile.lock
create mode 100644 _config.yml
create mode 100644 about.markdown
create mode 100644 index.markdown
Done
We can confirm our Jekyll site is properly set up by serving it locally. This can be useful if you’d like to test how something looks before committing it, but we won’t actually need to have Ruby or Jekyll available to publish and make updates to the site. Everything from this point forward will be handled by Cloud Build on GCP.
jekyll serve
Configuration file: mysitename/_config.yml
Source: mysitename
Destination: mysitename/_site
Incremental build: disabled. Enable with --incremental
Generating...
Jekyll Feed: Generating feed for posts
done in 0.328 seconds.
Auto-regeneration: enabled for 'mysitename'
Server address: http://127.0.0.1:4000/
Server running... press ctrl-c to stop.
The first thing we need to do on GCP is create a new project to house our site. I’ll assume you’ve already gone through the process of setting up a Google account and enabling access to Google Cloud Platform. I prefer to set up one GCP project for each site I’m hosting, and this becomes almost mandatory when using App Engine: each project can only host a single App Engine application. You can use multiple “services” in App Engine to host subdomains, but all App Engine services in the project share the same custom domain.
We’ll also need to create an App Engine app for us to deploy the site to. You’ll be asked to select a region and a language/environment. It really doesn’t matter which option you choose for the language since the app.yaml we define later will do this for us.
Next we’ll need to enable the App Engine Admin API so that Cloud Build can use it to publish new versions of our site as they are pushed via Git. Cloud Build will use its own service account to publish the app via this API, but it has to be enabled manually first.
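If you prefer the gcloud CLI to clicking through the console, the setup so far looks roughly like this (the project ID and region are examples):
gcloud projects create my-jekyll-site-12345
gcloud app create --project=my-jekyll-site-12345 --region=us-central
gcloud services enable appengine.googleapis.com --project=my-jekyll-site-12345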
In this example we’re going to be using Google’s “Cloud Source Repositories” to host the Git repo that contains our site. Cloud Build also supports watching third-party repositories like GitHub, but Cloud Source Repositories are free on GCP for a single user with 5 projects or fewer. Create a new repository attached to the project we just created. I usually name the repository the same as the project, but you may give it any name.
Once the repository is created the GUI will give us a few options for populating it. Since we already have a local repository, we’ll want to choose “Push code from a local Git repository”. If you haven’t done so already you should register your SSH key in GCP so you can access Cloud Source Repositories over SSH. You can also use the Google Cloud SDK or manual credentials for authentication, but most people are probably already used to using SSH keys for Git.
Go back to your local Jekyll site repository and run the commands provided by Google
git remote add google ssh://user@example.net@source.developers.google.com:2022/p/primal-turbine-268514/r/myawesomeproject
git push --all google
Enumerating objects: 9, done.
Counting objects: 100% (9/9), done.
Delta compression using up to 4 threads
Compressing objects: 100% (8/8), done.
Writing objects: 100% (9/9), 3.60 KiB | 3.60 MiB/s, done.
Total 9 (delta 0), reused 0 (delta 0)
To ssh://example.net@source.developers.google.com:2022/p/primal-turbine-268514/r/myawesomeproject
* [new branch] master -> master
Now that we have our repository and an empty App Engine setup, we can enable Cloud Build to automatically deploy our Jekyll site. Before we do anything you’ll have to enable the Cloud Build API so that we can add triggers.
Next, create a new Cloud Build trigger to watch the repository that we just created. You can set most of these values to whatever you would like. I set my trigger type to “Branch” and matched on the regular expression ^master$ to only run a deploy when I push commits to the master branch. You could also use tags to control when deploys happen. The only real requirement here is to choose Cloud Build configuration file for your build configuration. Leave the default as “cloudbuild.yaml”.
The final step is to go to the Cloud Build settings page and make sure the “App Engine Admin” role is enabled for our Cloud Build service account. It will need these permissions to deploy the app on App Engine. At this point we’re ready to push code and have it automatically deployed to App Engine.
We’ll be adding two files to our Jekyll repository to enable automatic builds and deployment to App Engine. The first is an “app.yaml” manifest that instructs App Engine on how to handle requests to the site. The second is a “cloudbuild.yaml” that instructs Cloud Build on how to spin up a Jekyll container to build the static files for the site.
The app.yaml file gets pretty complicated because we’re trying to offload as many requests as possible to the static file handlers. We need to account for as many types of requests as possible and make sure they are routed to the correct files. Ideally the only types of requests that actually fire up an instance are 404s. In this case we need the actual Jekyll app to respond because we can’t return a friendly 404 page and also set the response code to 404 using the static handlers.
We use a few flags on the jekyll serve command to avoid some issues: --skip-initial-build and --no-watch stop the instance from rebuilding the site that Cloud Build already generated, --safe disables any custom plugin code, and --trace gives us useful output if something does go wrong.
I set my app to basic scaling using the smallest instance class with a 1 minute timeout. The instance is really only ever going to have to handle 404s so there is no reason to change this unless you want your 404 pages to respond faster. In my testing they usually took about 2-4 seconds to respond if there was no running instance. One minute is the minimum amount of time billed for an App Engine instance, so we timeout the instance after that point.
Depending on your use-case it might be better to use automatic scaling instead of basic scaling. If you suspect that your app will receive a lot of traffic and you are below the free tier limits, automatic scaling will be less expensive. App Engine allows 28 instance-hours of automatic scaling instances but only 8 instance-hours of basic scaling instances under free tier.
All of the handlers except for the first and last ones basically try to translate a friendly URL into a real file path. “/blog” tries to find “/blog.html” or “/blog/index.html” before sending the request to “script: auto” and spinning up an instance. “require_matching_file: true” is the special sauce that allows this sort of functionality.
runtime: ruby25
entrypoint: bundle exec jekyll serve -P $PORT --safe --skip-initial-build --no-watch --trace
instance_class: B1
basic_scaling:
max_instances: 1
idle_timeout: 1m
handlers:
# Real file names
- url: /(.+)
static_files: _site/\1
upload: _site/.*
require_matching_file: true
secure: always
redirect_http_response_code: 301
# Directory indexes
- url: (/.+)?/
static_files: _site\1/index.html
upload: _site/.*
require_matching_file: true
secure: always
redirect_http_response_code: 301
# Directories as files
- url: /(.+)/
static_files: _site/\1.html
upload: _site/.*
require_matching_file: true
secure: always
redirect_http_response_code: 301
# Friendly extensionless file URLs
- url: /(.+)
static_files: _site/\1.html
upload: _site/.*
require_matching_file: true
secure: always
redirect_http_response_code: 301
# Friendly extensionless directory URLs
- url: /(.+)
static_files: _site/\1/index.html
upload: _site/.*
require_matching_file: true
secure: always
redirect_http_response_code: 301
# Catch-all
- url: .*
script: auto
secure: always
redirect_http_response_code: 301
The cloudbuild.yaml provides the steps to build and deploy the Jekyll site.
The first step runs chmod to add read/write permissions for all users in the build workspace. The Jekyll container runs as a non-root user by default and the lack of permissions will cause build errors. Almost any container could be used for this purpose so I used the built-in cloud-builders git container.
The second step uses a Jekyll container to run “jekyll build” on the workspace which will populate our _site directory with the static files that App Engine will serve. Assuming there are no issues with your Jekyll configuration we should get a successful build and be ready to publish.
The third and final step is to actually deploy the app to App Engine. We use the built-in cloud-builders gcloud container and run “app deploy” on the workspace directory. This step will also cause App Engine to build a container image for our App Engine instances to use when they are invoked. This usually took around 10 minutes in my testing.
steps:
- id: Update Permissions
name: "gcr.io/cloud-builders/git"
entrypoint: "chmod"
args: ["-v","-R", "a+rw","."]
- id: Build Jekyll Site
name: 'jekyll/jekyll'
args: ['jekyll','build']
- id: Deploy to App Engine
name: "gcr.io/cloud-builders/gcloud"
args: ["app", "deploy"]
timeout: "15m"
We’re now ready to publish our site for the first time. Add the two yaml files we created, commit them, and then push to Cloud Source Repositories.
git add app.yaml
git add cloudbuild.yaml
git commit -m "Add deployment configuration"
git push --set-upstream google master
[master 6f2d752] Add deployment configuration
2 files changed, 32 insertions(+)
create mode 100644 app.yaml
create mode 100644 cloudbuild.yaml
Done
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 4 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 1.05 KiB | 1.05 MiB/s, done.
Total 4 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1)
To ssh://example.net@source.developers.google.com:2022/p/primal-turbine-268514/r/myawesomeproject
f6b9286..6f2d752 master -> master
Branch 'master' set up to track remote branch 'master' from 'google'.
Assuming everything is working as intended, we can head over to the Cloud Build build history and see our new build running. You’ll also see a second build spawn for the App Engine container image. Once the Cloud Build process is complete our app should appear in App Engine. This can sometimes take a few minutes. Eventually you should be able to visit your App Engine URL (project-id.appspot.com) and see your site.
At this point you have a fully functional site. You’ll probably want to add a custom domain instead of using the default appspot.com domain.
Cloud Build creates a container image every time the app is deployed. We’d need to add another task to clean these up so we don’t incur unnecessary costs.
App Engine leaves a lot of old versions around when you deploy. We should probably also clean up old ones as part of our build process.
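A manual cleanup pass might look roughly like the following (the version ID and repository path are examples, and the exact image location can vary, so list before you delete anything):
gcloud app versions list
gcloud app versions delete VERSION_ID
gcloud container images list --repository=gcr.io/PROJECT_ID/appengine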
If you don’t care about friendly 404 pages and want to completely prevent App Engine from ever spawning an instance and incurring costs, you can replace the “Catch-all” handler with the handler below. I suspect that Google might at some point crack down on App Engine apps that never spawn an instance, but it doesn’t appear they have done so yet.
# Return 404
- url: /.*
static_files: fakepath
upload: _site/.*
require_matching_file: false
secure: always
redirect_http_response_code: 301
While this behavior in Windows is useful for a typical user who may have purchased an unformatted thumb drive, it can be particularly annoying for power users. If you deal a lot with VeraCrypt-encrypted partitions or have thumb drives with separate partitions for Windows and Linux, you’re probably going to want to prevent them from being wiped out by an accidental press of the Enter key.
Luckily for us, Windows is set up to ignore certain types of partitions and hide them from the user. We can abuse this functionality by assigning specific partition types to our encrypted or Linux partitions.
The different types have slightly different behavior, so you’ll want to pick the one most suited to your use. VeraCrypt, for example, generally doesn’t allow you to select the partitions that Windows hides completely. The table below shows the type IDs that will not prompt for formatting on Windows. Unfortunately, in my testing I was unable to find any MBR partition types that Windows ignores, so you’ll need to wipe the disk and format with GPT before attempting any of this.
| Name | Code | Notes |
| --- | --- | --- |
| Microsoft Reserved | E3C9E316-0B5C-4DB8-817D-F92DF00215AE | Requires System attribute set. Linux probably won’t auto-mount. |
| Microsoft Recovery | DE94BBA4-06D1-4D40-A16A-BFD50179D6AC | Requires System attribute set. Linux probably won’t auto-mount. |
| Storage Spaces Protective | E75CAF8F-F680-4CEE-AFA3-B001E56EFC2D | Doesn’t appear in VeraCrypt. |
| Logical Disk Manager Metadata | 5808C8AA-7E8F-42E0-85D2-E1E90434CFB3 | Doesn’t appear in VeraCrypt. Empty volume shows up in Disk Management. |
| Logical Disk Manager Data | AF9B60A0-1431-4F62-BC68-3311714A69AD | Doesn’t appear in VeraCrypt. Error volume shows up in Disk Management. |
On Windows, you can use the built-in utility DiskPart to modify most of these partition IDs. Unfortunately, on GPT disks you are not allowed to set the “Microsoft Reserved” or LDM partition types. Trying to set these IDs will return an error. You also won’t be able to set the System attribute. You’ll have to use one of the Linux tools for that.
> diskpart
Microsoft DiskPart version 10.0.17134.1
Copyright (C) Microsoft Corporation.
On computer:
DISKPART> select disk X
Disk X is now the selected disk.
DISKPART> select part X
Partition X is now the selected partition.
DISKPART> setid id=E75CAF8F-F680-4CEE-AFA3-B001E56EFC2D
DiskPart successfully set the partition ID.
On Linux you can use either fdisk or gdisk. The commands are mostly the same, but they differ slightly.
fdisk:
> fdisk /dev/sdx
Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help):t
Selected partition 1
Partition type (type L to list all types):E3C9E316-0B5C-4DB8-817D-F92DF00215AE
Changed type of partition 'Linux filesystem' to 'Microsoft Reserved'.
Command (m for help):w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
gdisk:
> gdisk /dev/sdx
GPT fdisk (gdisk) version 1.0.1
...
Command (? for help): t
Using 1
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):E3C9E316-0B5C-4DB8-817D-F92DF00215AE
Changed type of partition to 'Microsoft Reserved'
Command (? for help):w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N):y
OK; writing new GUID partition table (GPT) to /dev/sdx.
The operation has completed successfully.
If you want to set the system attribute too, you’ll need to run these commands in gdisk:
> gdisk /dev/sdx
GPT fdisk (gdisk) version 1.0.1
...
Command (? for help):x
Expert command (? for help):a
Using 1
...
Attribute value is 0000000000000000. Set fields are:
No fields set
Toggle which attribute field (0-63, 64 or <Enter> to exit):0
Have enabled the 'system partition' attribute.
Attribute value is 0000000000000001. Set fields are:
0 (system partition)
Toggle which attribute field (0-63, 64 or <Enter> to exit):64
Expert command (? for help):m
Command (? for help):w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N):y
OK; writing new GUID partition table (GPT) to /dev/sdx.
The operation has completed successfully.
Once this is complete, you should be able to plug your portable disk into a Windows computer without being prompted to format. Depending on the type ID you chose you may or may not see a drive letter appear.
As always, your mileage may vary as this is an undocumented behavior and there is no guarantee Windows will continue working like this in the future. Exercise caution if your drive contains irrecoverable data.
To summarize my point of view:
If you configure something on behalf of users without telling them and it breaks their workflow, you are the problem.
I’ve seen this issue crop up a few times and it usually winds up with someone losing data and someone in IT pointing the finger at the user and saying “Well, you should’ve known better, not my problem”. While the user is sometimes genuinely at fault, there are definitely cases where these situations are a failure of IT to do their due diligence.
Let’s say you work in an office that has a clean desk policy due to some strict compliance requirements. Every night at 6pm the cleaning crew comes around, collects any discarded papers off of everyone’s desk, and shreds them. This is non-default behavior for most people. Any reasonable employee onboarding procedure would include training the new employee on this policy, and you’d probably even make them sign something that says they understand.
Now let’s think about a parallel in the IT world. The default behavior of Microsoft Outlook is to only remove items from Deleted Items when the user asks it to. If you set a global company policy that removes items after 7 days, you’ve broken the assumption of users who are familiar with the default behavior. Maybe this user’s preferred workflow is to dump emails from their deleted items into a PST file once a month. Maybe they like to keep Deleted Items around forever because it’s excluded from search by default, but know they can search it explicitly if they need to.
If you fail to train users on “the way we do things here” when they are onboarded, you have zero right to point the finger at them when your policies cause them to have problems. People love to blame companies like Microsoft for pushing out new changes that break their existing automation or configuration policies, but immediately blame the user when they do the same thing to them.
The part where this really starts to get into a grey area is when you have to decide “how much notice is enough notice” for a particular policy or change. People tend to get information overload very easily, so you don’t want to drown out your important updates with random unimportant information.
On the other hand, even simple policies that no one even thinks about like the default calendar sharing in Exchange can become disastrous once somebody finds out their department is getting dissolved because more than free/busy was shared by default.
This post was shamelessly stolen from my reddit post of the same name.
It’s worth noting that this guide only applies to OTP/U2F functionality. You can use the native RDP smartcard redirection to use PIV and GPG functionality without doing any extra work.
Update 2020/12/30 - NO LONGER WORKING
It looks like a recent update or some change in browser behavior has caused U2F to no longer function properly over RemoteFX redirection. The device itself does redirect but the U2F functionality just times out in every browser I’ve tried. I’ll keep playing around with it but if you manage to get it working let me know!
The first thing we’ll need to do on our client computer (the one where the Yubikey physically resides) is make some changes to Group Policy. You can do this via the “Local Group Policy” MMC or if you are domain-joined you can push out the setting with a domain Group Policy Object.
The policy we’re looking for is called “Allow RDP redirection of other supported RemoteFX USB devices from this computer” and is located here in the tree:
Set the policy to “Enabled”. We can set it to either “Administrators and Users” or “Administrators Only” depending on the use-case.
We’ll also need to make some changes to the registry. By default Windows will not list the Yubikey as a device that can be redirected, so we need to add its USB device IDs to the list. I pulled these device IDs from a Yubikey 4, so your mileage may vary with other models. You can use the following registry file to automatically add the required entries.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\Client\UsbSelectDeviceByInterfaces]
"Yubikey Smartcard"="{50dd5230-ba8a-11d1-bf5d-0000f805f530}"
"Yubikey FIDO"="{745a17a0-74d3-11d0-b6fe-00a0c90f57da}"
To find the IDs for other models of Yubikey (or any other device for that matter), look at the “Class Guid” property of the top-level device in Device Manager. The easiest way to find the YubiKey is to look under “Smart card readers”. You can find the HID device by switching to “Devices by connection” in the “View” menu and looking for the other entries under the same “USB Composite Device”.
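If you’d rather query this from PowerShell than click through Device Manager, something like this should surface the GUIDs (the friendly-name filter is a guess and may need adjusting for your model):
Get-PnpDevice -FriendlyName "*YubiKey*" | Select-Object FriendlyName, ClassGuid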
After making these changes I recommend rebooting the client computer, although it may not be strictly necessary.
Similarly to the client computer we will need to update a Group Policy on the server as well. The policy we’re looking for is called “Do not allow supported Plug and Play device redirection” and is located here in the tree:
The naming of this policy is very confusing since it is enabled by default if left unconfigured. We’ll need to set the policy to “Disabled” and then reboot the computer.
Once we’ve done all of the setup the only thing left to do is to start a remote desktop session with device redirection enabled. Go to the “Local Resources” tab of the RDP client settings and click “More…” under “Local devices and resources”. You should now see “Other supported RemoteFX USB devices” with a list of devices. Check the appropriate device and it will be available to you on the remote machine to authenticate with.
If you are using the native smartcard functionality of your Yubikey (PIV or GPG) then those functions will not work while the device is being redirected via RemoteFX. You will have to use the device redirection icon on the connection bar at the top of the screen to switch back and forth between functions.