Camera Bag - 2023 Edition

I last wrote about my camera bag in 2020, and while not a lot has shifted since then, enough has changed to warrant an update post!

Since I last wrote about this bag, I’ve moved from Android to iOS and I’ve been shooting with the X100V for a lot longer. The Peak Design 3L Everyday Sling is still the bag of choice, holding just enough for my small kit. Here’s what’s in the bag now:

  • Fujifilm X100V

    • JJC Filter Adapter and hood

    • Tiffen Black Pro Mist 1/4 49mm filter

    • 128GB Samsung EVO Select SD card

    • Peak Design Leash

  • Fujifilm TCL-X100 and WCL-X100

    • B+W MRC Nano Clear filters on each adapter

  • Fujifilm EF-X20 flash w/ case

  • Microfibre cloth

  • 2x Neewer Fuji batteries

  • Business cards

  • AirTag

  • DIY flash gel kit

  • Big Idea Design Titanium Pocket Tool

  • GUS pill fob with some OTC meds (ibuprofen, Benadryl, etc.)

  • External straps for the sling

  • 49mm filter holder

    • B+W MRC Nano Clear Filter

    • NiSi True Color Pro Nano Circular Polarizer

  • Tom Bihn Ghost Whale Small

    • Mophie 6000mAh power bank

    • Anker USB-C to USB-C 3ft

    • Anker USB-C to Lightning 3ft

    • Lightning SD Card Reader

    • Anker PowerPort Nano III 30W

  • Tom Bihn Ghost Whale Super Mini

    • Ray-Ban Folding Wayfarers

This setup basically affords me everything I need to shoot and edit remotely for an indefinite amount of time, and also doubles as a day-bag and a tech-bag while travelling with photography in mind. Lightroom Mobile and Adobe cloud really are game changers when it comes to keeping it light, and modern phones with powerful and efficient processors and great screens make it very easy to get solid basic edits while in the field.

I’ve got more tweaks to make (67mm filters anyone?), but for the time being I’ve been pretty content with the kit!

Home Server 2023: Secondary Node

My secondary DNS server was running on a Pi 3B+; however, a docker pull took a long time to download and extract due to the limited hardware, and in some cases I was seeing 10+ minute extract times… I decided to replace the Pi with another refurbished PC, and managed to grab an HP EliteDesk 800 G2 mini desktop for under $200 shipped with the below specs:

  • Intel Core i5 6400T

  • 16GB DDR4 RAM

  • 256GB SATA SSD

It’s cool, it’s quiet, it’s compact, and it’s got plenty of RAM for whatever it ends up being used for beyond secondary DNS. For the time being I’ve put Proxmox on it and had my secondary DNS server set up with Docker in about half an hour. I may try out Frigate on it with Intel GPU acceleration, and I’ll likely also use this node for a lot of the dev stuff I want to experiment with rather than doing it on the main host.
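
For reference, the compose file for the secondary Pi-hole is tiny. The below is a minimal sketch rather than my exact config - the timezone, password, and volume paths are placeholders:

services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"                  # web admin UI
    environment:
      TZ: "America/Toronto"          # placeholder timezone
      WEBPASSWORD: "changeme"        # placeholder admin password
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped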

Home Server 2023: RAM and Storage Upgrades

I suppose this was all planned; I just didn’t expect it to come up so quickly. Such is life when you come across some sweet deals though! I added a 16GB kit of DDR4, bringing my RAM to 32GB, which should give me room for more VMs/containers down the line but, more importantly, more room for 4K transcode cache. The jury’s still out on whether this makes a big difference, but bumping the MediaStack container to 16GB of available memory lets me map about half of that to a transcode directory. All in, the RAM came to under $50 shipped.
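
As a rough sketch of what that mapping can look like in Docker Compose (the service name, image, and 8GB size below are placeholders, not pulled from my actual stack):

services:
  plex:
    image: lscr.io/linuxserver/plex:latest    # placeholder image
    mem_limit: 16g                            # cap the container at 16GB of RAM
    volumes:
      - type: tmpfs
        target: /transcode                    # point the transcoder at RAM-backed scratch space
        tmpfs:
          size: 8589934592                    # ~8GB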

22TB for less than $400? I regret not getting two!

The other upgrade was 22TB of spinning disk storage courtesy of Western Digital. I grabbed a 22TB Western Digital My Book for a ridiculously cheap $360 shipped thanks to a coupon-stacking error. After verifying the drive was functional I shucked it and ended up with a 22TB white-label, enterprise-class, 7200RPM helium-filled drive. It found its way into the server very quickly, and after a bit of fiddling around it was mounted and accessible. The only question was how to present two drives to Plex at the same time in a seamless way. Mapping libraries across different drives is a pain, and the wait for a fresh file to be copied from the SSD to the HDD isn’t a solid user experience. On top of that, the copies could introduce I/O overhead and possibly stuttering during playback, another user-experience no-go. I decided to go with MergerFS.

MergerFS is a union file system that presents multiple mounts as a single mount point to software, and it supports several policies for deciding which underlying drive new data gets written to. In my case, I’ve set it up to present both the SSD and the HDD as a single mount point, with MergerFS writing all new data to its “first found” drive, which in my configuration is the SSD. I also set up a cron job that runs daily at 5AM to move files older than 14 days off the SSD and onto the HDD, then clean up empty directories (a sketch of both is below the list). This should accomplish a few things:

  • New content is downloaded and served off fast storage.

  • Content being downloaded while content is being streamed should have minimal impact to the streaming experience due to the high IOPS of an SSD.

  • Older content is moved automatically during low/no use times to ensure no user experience impact.

  • The SSD is regularly cleaned keeping storage usage there to a minimum, allowing me to use it for other things like serving up my Lightroom back catalog over the network.

  • A single disk loss doesn’t mean all data on the other disks is lost. This is better than a JBOD but not as good as RAID, which is perfectly fine for a media server.
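
For reference, here’s roughly what that looks like. The “first found” policy, the 14-day threshold, and the 5AM schedule are the real settings; the mount points, pool path, and remaining options are illustrative rather than lifted straight from my config:

# /etc/fstab - pool the SSD and HDD into one mount, writing new files to the first branch (the SSD)
/mnt/ssd:/mnt/hdd  /mnt/media  fuse.mergerfs  defaults,allow_other,category.create=ff  0 0

And the mover that cron kicks off at 5AM:

#!/bin/bash
# Move files older than 14 days from the SSD branch to the HDD branch,
# preserving the relative directory structure, then remove any empty directories left behind.
SRC="/mnt/ssd"
DST="/mnt/hdd"

find "$SRC" -type f -mtime +14 -printf '%P\n' \
  | rsync -a --files-from=- --remove-source-files "$SRC/" "$DST/"

find "$SRC" -mindepth 1 -type d -empty -delete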

So far this has been working great. The initial copy of back-catalog content saw an average write speed just over 200MB/s, which is awesome for a spinning disk. Content streamed from the spinning disk is served up quickly with no noticeable user impact, and the move script has been executing daily without issue. Looking forward to seeing how long it takes to chew through this much space!

Home Server 2023: VPN

My home network has some pretty sweet features! It’s got my files, my services, whole home ad-blocking, and DNS over HTTPS. I love these features when I’m at home, so why not get them while I’m out too? I accomplished all this with Cloudflare DNS and PiVPN.

PiVPN and WireGuard - a lightweight, easy-to-configure setup!

PiVPN is a script for setting up a lightweight, hardened, automatically updating VPN environment on either OpenVPN or WireGuard. It’s designed to run on a Raspberry Pi with Pi OS, but since that’s basically just Debian I went ahead and ran it in an unprivileged Debian container instead. I chose WireGuard since it’s lightweight and easy to set up, and got through the install really quickly with the below command:

curl -L https://install.pivpn.io | bash 

I followed the on-screen prompts to set up and configure WireGuard, entered my URL for VPN access, and kept most of the rest as defaults. Once that was finished I grabbed a Cloudflare DDNS updater script:

git clone https://github.com/K0p1-Git/cloudflare-ddns-updater.git

I configured it quickly with my URL and API key, then set up a cron job to run it every 5 minutes.
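
The cron entry itself is just a one-liner (the path and script name below are placeholders for wherever the cloned repo lives):

# Update the Cloudflare DNS record every 5 minutes
*/5 * * * * /bin/bash /path/to/cloudflare-ddns-updater/updater.sh >/dev/null 2>&1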

I added a client config in PiVPN for my iPhone, installed the app, scanned the QR code, and configured the VPN to connect automatically whenever I’m not on my home WiFi SSID. The result? Seamless VPN connectivity when I’m away from home and an experience that’s just like being on my home network - DNS-based ad-blocking, DNS over HTTPS, and access to all my local services! Speeds are basically line speed - easily up to 1Gbps. I set up configs for my iPad and MacBook as well, so they’re ready to go when I am. Overall this was an incredibly easy setup and the benefits are astounding. The fact that it “just works” really drives home the user-experience focus of what I strive to achieve with my home network.

Home Server 2023: Photo Backups

I’m sitting on a good terabyte of RAW photos. They’ve had various homes throughout the years, from spinning disks, to SATA SSDs, to most recently a 2TB Samsung T5 USB SSD. I discussed my photo workflow a bit in this post here, but I didn’t detail how I keep these images backed up. For the longest while I relied on Amazon Photos running on my desktop to keep things synced to an online source, and with unlimited RAW storage for Prime members it was a no-brainer. With the move to the MacBook Pro and my storage sitting on an external drive, I haven’t been running that. Now that the home server is up and running I want to get the backups going again, and I’m going to try to follow the 3-2-1 rule for backing up my Lightroom Classic catalog:

The basis here is that you should keep three separate copies of your data - a working copy and two backups, with one of those backups located off-site. I’ll be doing things slightly differently but still keeping in the spirit of the rule:

  • My working copy will stay on the Samsung T5 portable SSD

  • I’ll store two backups on the home server: one on the 4TB SSD, and a copy of that on the 22TB Western Digital hard drive arriving within a week or two.

  • I’ll be doing a daily backup from the server to a cloud storage provider like Backblaze or Wasabi.

For the MacBook-to-home-server backups, I created an unprivileged Debian container running Samba and gave it a mount point on the 4TB SSD. I also built a shell script to rclone the RAW file directory to the Samba share if my MacBook is (a) docked and (b) has the external SSD attached. The script also pushes a notification to me via Pushover on a successful backup, with the details of the backup included:

#!/bin/zsh

# Send a Pushover notification with the supplied message.
function push {
    curl -s -F "token=APPTOKENHERE" \
    -F "user=USERTOKENHERE" \
    -F "title=Photo Library Backup Notification" \
    -F "message=$1" https://api.pushover.net/1/messages.json
}

# Check that the external SSD is mounted and that the MacBook is docked (Ethernet present).
MOUNTCHECK=$(df | grep "/Volumes/External SSD")
ETHERNETCHECK=$(networksetup -listallhardwareports | grep "Thunderbolt Ethernet Slot 2")
TIMESTAMP=$(date "+%Y/%m/%d %H:%M:%S")

if [[ -z $MOUNTCHECK || -z $ETHERNETCHECK ]]; then
    # One or both dependencies are missing - log why and bail out.
    echo "$TIMESTAMP INFO : One or more dependencies not available, not attempting backup." >> /path/to/log.txt 2>&1
    if [[ -z $MOUNTCHECK ]]; then
        echo "$TIMESTAMP INFO : The SSD is not mounted to /Volumes/External SSD." >> /path/to/log.txt 2>&1
    fi
    if [[ -z $ETHERNETCHECK ]]; then
        echo "$TIMESTAMP INFO : The computer is not docked." >> /path/to/log.txt 2>&1
    fi
    exit
else
    # Both dependencies are present - copy the RAW library to the Samba share via rclone,
    # then push the tail of the rclone log as the notification body.
    echo "$TIMESTAMP INFO : Dependencies available, backing up photos to SMB share..." >> /path/to/log.txt 2>&1
    rclone copy --local-no-set-modtime --log-level INFO --log-file=/path/to/log.txt "/Volumes/External SSD/LightRoomPhotos" "photos:photos"
    OUT=$(tail -n 6 /path/to/log.txt | tr -s '[:blank:]' | tr "\t" " ")
    push "$OUT"
    exit
fi

I can pretty easily get this running daily at 2AM (or whenever I decide it should run) using launchd on macOS. Considering I really only have this drive connected when I’m ingesting content from Creative Cloud into Lightroom Classic, I can just be sure to leave it connected overnight on those days and the backup will happen. Logging is also set up so I can review as necessary.
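
For the curious, the launchd side is just a small property list dropped into ~/Library/LaunchAgents and loaded with launchctl - the label and script path below are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.photo-backup</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/zsh</string>
        <string>/path/to/photo-backup.sh</string>
    </array>
    <!-- Run daily at 2AM -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>2</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>

A one-time launchctl load ~/Library/LaunchAgents/com.example.photo-backup.plist and it’s scheduled.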

Once it hits my Samba share, a cron job on the Debian container will run daily to rclone the directory over to the 22TB drive, and a yet-to-be-decided backup tool will back up the SSD data to either Wasabi or Backblaze B2. I’m considering Restic for that piece, but I’ll be exploring the options available before jumping.
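
Sketched out, the container-side crontab would look something like this - the mount paths and bucket name are placeholders, and the restic line is only an example of how the cloud leg could slot in if that’s what I land on (repository and credential setup not shown):

# Mirror the SSD copy of the photo library to the 22TB drive daily at 3AM
0 3 * * * rclone copy /mnt/ssd/photos /mnt/hdd/photos --log-file=/var/log/photo-mirror.log --log-level INFO

# Example only: nightly restic backup of the SSD copy to Backblaze B2
0 4 * * * restic -r b2:my-bucket:photos backup /mnt/ssd/photos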

Overall I think this gives me the advantage of working off relatively fast local storage when I need to touch my Lightroom back catalog, while also maintaining a number of backups on my server and in the cloud. It also means I’m not tied to a network drive when working on the back catalog!

It was a lot of fun learning more about zsh, figuring out the Pushover API via curl, and getting all these little pieces like rclone and Samba working together. It’s been a while since I dug in this deep at home, and it was a refresher I really needed!

Home Server 2023: DNS Infrastructure

It’s been a while since I’ve posted anything overly technical, so I figured I’d drop a couple of posts over the next few weeks about the new addition to the basement: a home server.

It’s been a while since I’ve had a home server, but dabbling in Home Assistant and a desire to cut down on our streaming services led me back down the rabbit hole of building an all-in-one box for home use. I was able to snag an HP EliteDesk 800 G3 Small Form Factor about a month ago for $360 shipped with the below specs:

  • Intel Core i7 7700

  • 16GB DDR4

  • 256GB SSD

I ditched the 256GB SATA SSD, threw in a 512GB NVMe drive along with a 4TB Samsung 870 EVO SATA drive that we picked up for very little money, and put Proxmox on it. The rest has been a lot of fun figuring out various technologies and working to build something that “just works”. My first fun project was redundant whole home ad-blocking with a pair of Pi-hole instances! One runs in a Debian container on the Proxmox host itself, and the other runs on the Raspberry Pi 3B+ that was previously used for my Home Assistant install.

The above diagram is a quick overview of how the install works currently. Some notes:

  • All Docker containers are running on non-root UIDs and GIDs.

  • Diun and the Docker Socket Proxy are on their own network, and the proxy is only exposing the required parts of the socket to Diun.

  • Diun notifies of image updates by throwing a notification to Discord every Monday at 10AM.

  • The primary and secondary Pi-holes are set up with Gravity Sync to ensure the allow and block lists, along with local DNS records, are kept in sync between the two nodes.

  • Duplicati backs up the containers’ config directories on the host to OneDrive daily.

  • On the primary Pi-hole container, a cron job runs daily to add known-safe URLs to the allow list (see the sketch after this list).

  • The Pi-Holes are subscribed to a number of block lists to ensure a wide swath of trackers, malicious content providers, ads, and suspicious content is blocked.
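
That allow list job is nothing fancy. A minimal sketch, assuming a plain text file of known-safe domains maintained on the container (the list path is a placeholder):

#!/bin/bash
# Add each domain from a locally maintained list to Pi-hole's allow list.
# /etc/pihole/known-safe.txt is a placeholder - one domain per line.
while read -r domain; do
    pihole -w "$domain"
done < /etc/pihole/known-safe.txt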

The whole thing is very easy to tear down and spin back up quickly, and updates are all completed with a quick docker compose pull && docker compose up -d. I can update on a per-node basis too, staggering things to ensure there are no ill effects.

A couple of things I want to switch out though:

  • I want to replace Duplicati with Restic or something similar running on the hosts. I’ve read some horror stories of Duplicati restores.

  • I’d also like to do daily backups to OneDrive and monthly backups to Backblaze B2 or Wasabi, once I get that up and running for my photo backups.

  • I’ve recently moved to Pushover for push notifications, so I’d also like to move my Diun notifications from Discord to Pushover. It’s fairly trivial to tie Pushover into shell scripts as well (I’ll detail this in a future post), so I’d like to integrate push notifications into any job that’s deemed important.

Overall I’ve been pretty happy with the performance of the Pi-hole stacks so far, and getting more familiar with Docker has shown me just how powerful the platform can be for building out solutions quickly. It’s also great to get user feedback when people start noticing there are no more ads in their Android games!