<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
      <title>escapades in engineering</title>
      <link>https://kcirtapfromspace.github.io/kcirtap-blog</link>
      <description>eventually correct with Patrick Deutsch</description>
      <generator>Zola</generator>
      <language>en</language>
      <atom:link href="https://kcirtapfromspace.github.io/kcirtap-blog/rss.xml" rel="self" type="application/rss+xml"/>
      <lastBuildDate>Thu, 19 Feb 2026 00:00:00 -0700</lastBuildDate>
      <item>
          <title>Ukodus: Building a Sudoku Galaxy in Rust and WebAssembly</title>
          <pubDate>Thu, 19 Feb 2026 00:00:00 -0700</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/ukodus-sudoku-galaxy/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/ukodus-sudoku-galaxy/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/ukodus-sudoku-galaxy/">&lt;h1 id=&quot;ukodus-building-a-sudoku-galaxy-in-rust-and-webassembly&quot;&gt;Ukodus: Building a Sudoku Galaxy in Rust and WebAssembly&lt;&#x2F;h1&gt;
&lt;p&gt;Every Sudoku app on iOS is annoying. Ads after every puzzle. Subscriptions to unlock “hard” mode. Hints that just fill in the answer without teaching you anything. I got tired of it, so I did what any reasonable person would do: spent months building an entire Sudoku platform from scratch.&lt;&#x2F;p&gt;
&lt;p&gt;The result is &lt;a href=&quot;https:&#x2F;&#x2F;ukodus.now&quot;&gt;Ukodus&lt;&#x2F;a&gt; — a Rust-powered Sudoku engine that runs natively on iOS, compiles to WebAssembly for the browser, and includes a force-directed galaxy visualization where every puzzle ever played becomes a star. Because apparently I can’t just solve a problem without also building a cosmos around it.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-engine-45-techniques-in-rust&quot;&gt;The Engine: 45 Techniques in Rust&lt;&#x2F;h2&gt;
&lt;p&gt;The core of Ukodus is a Sudoku engine written in Rust. It implements 45 human-style solving techniques organized into 10 families:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Singles&lt;&#x2F;strong&gt; — Hidden Single, Naked Single (the basics)&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Pairs &amp;amp; Triples&lt;&#x2F;strong&gt; — Naked Pair through Hidden Quad&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Intersections&lt;&#x2F;strong&gt; — Pointing Pairs, Box&#x2F;Line Reduction&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Fish&lt;&#x2F;strong&gt; — X-Wing, Swordfish, Jellyfish, plus finned and mutant variants&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Wings&lt;&#x2F;strong&gt; — XY-Wing, XYZ-Wing, W-Wing, WXYZ-Wing&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Chains&lt;&#x2F;strong&gt; — X-Chain, 3D Medusa, AIC&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Rectangles&lt;&#x2F;strong&gt; — Unique Rectangles (6 types), Hidden Rectangle, Empty Rectangle&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;ALS&lt;&#x2F;strong&gt; — Almost Locked Sets: ALS-XZ, ALS-XY-Wing, ALS Chain&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Forcing&lt;&#x2F;strong&gt; — Nishio, Cell&#x2F;Region Forcing Chains, Dynamic Forcing Chains&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Other&lt;&#x2F;strong&gt; — Sue de Coq, Aligned Pair Exclusion, Death Blossom, BUG+1, Backtracking&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Each technique has a Sudoku Explainer (SE) difficulty rating. Hidden Single is 1.5 (beginner territory), Dynamic Forcing Chain is 9.3 (you need a PhD or a lot of patience), and Backtracking sits at 11.0 as the last resort. The engine uses these ratings to classify every puzzle into difficulty tiers: Beginner, Easy, Medium, Intermediate, Hard, Expert, Master, and Extreme.&lt;&#x2F;p&gt;
&lt;p&gt;Why does this matter? Because most Sudoku apps rate puzzles by counting givens or using some vague internal metric. SE ratings map directly to the hardest technique you’d need to solve the puzzle. A puzzle rated 3.2 means you’ll need X-Wings. A 7.5 means ALS Chains or Nishio. You know exactly what you’re getting into.&lt;&#x2F;p&gt;
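&lt;p&gt;As a rough sketch of what that classification step can look like (the tier cutoffs below are illustrative guesses for this post, not the engine’s actual boundaries):&lt;&#x2F;p&gt;

```typescript
// Map a Sudoku Explainer (SE) rating to a difficulty tier.
// NOTE: the tier boundaries here are illustrative guesses,
// not the actual cutoffs used by the Ukodus engine.
type Tier =
  | 'Beginner' | 'Easy' | 'Medium' | 'Intermediate'
  | 'Hard' | 'Expert' | 'Master' | 'Extreme';

function tierForRating(se: number): Tier {
  if (se >= 9.0) return 'Extreme';
  if (se >= 8.0) return 'Master';
  if (se >= 7.0) return 'Expert';
  if (se >= 5.0) return 'Hard';
  if (se >= 4.0) return 'Intermediate';
  if (se >= 3.0) return 'Medium';
  if (se >= 2.0) return 'Easy';
  return 'Beginner';
}
```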
&lt;p&gt;The engine also generates puzzles with guaranteed unique solutions, rates them on generation, and encodes them as 8-character short codes for sharing. The same short code works across web, iOS, and terminal.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;wasm-for-the-browser&quot;&gt;WASM for the Browser&lt;&#x2F;h2&gt;
&lt;p&gt;The Rust engine compiles to WebAssembly via &lt;code&gt;wasm-pack&lt;&#x2F;code&gt;. The output is a 554KB &lt;code&gt;.wasm&lt;&#x2F;code&gt; binary and a JS glue module that exposes the &lt;code&gt;SudokuGame&lt;&#x2F;code&gt; class. The game renders to an HTML &lt;code&gt;&amp;lt;canvas&amp;gt;&lt;&#x2F;code&gt; — the Rust side owns all the drawing logic, which means the rendering is identical regardless of platform.&lt;&#x2F;p&gt;
&lt;p&gt;Loading WASM in a SvelteKit app requires some care. You can’t let Vite try to bundle the WASM module at build time, so the loader uses a dynamic import with a &lt;code&gt;@vite-ignore&lt;&#x2F;code&gt; pragma:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;typescript&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-typescript &quot;&gt;&lt;code class=&quot;language-typescript&quot; data-lang=&quot;typescript&quot;&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;const &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;wasmJsPath &lt;&#x2F;span&gt;&lt;span&gt;= &amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&#x2F;wasm&#x2F;sudoku_wasm.js&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;const &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;mod &lt;&#x2F;span&gt;&lt;span&gt;= &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;await &lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;import&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;&#x2F;* @vite-ignore *&#x2F; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;wasmJsPath&lt;&#x2F;span&gt;&lt;span&gt;);
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;await &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;mod&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;default&lt;&#x2F;span&gt;&lt;span&gt;({
&lt;&#x2F;span&gt;&lt;span&gt;  module_or_path: new URL(&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&#x2F;wasm&#x2F;sudoku_wasm_bg.wasm&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;, window.location.&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;origin&lt;&#x2F;span&gt;&lt;span&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;});
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The WASM files live in &lt;code&gt;static&#x2F;wasm&#x2F;&lt;&#x2F;code&gt; and get served as plain static assets. No Vite WASM plugin, no special bundler config. The &lt;code&gt;&#x2F;play&#x2F;&lt;&#x2F;code&gt; route sets &lt;code&gt;ssr = false&lt;&#x2F;code&gt; so SvelteKit generates a minimal HTML shell that hydrates client-side — the WASM needs a browser environment with a canvas, so server-side rendering would just blow up.&lt;&#x2F;p&gt;
&lt;p&gt;The game loop is a standard &lt;code&gt;requestAnimationFrame&lt;&#x2F;code&gt; cycle. Every frame calls &lt;code&gt;game.tick()&lt;&#x2F;code&gt; on the WASM side, which handles input processing, animation, and canvas rendering. Keyboard events get forwarded from the Svelte component to the WASM engine via &lt;code&gt;game.handle_key(event)&lt;&#x2F;code&gt;. The engine supports vim-style navigation (&lt;code&gt;hjkl&lt;&#x2F;code&gt;), arrow keys, and WASD — because every good application should support at least three ways to move a cursor.&lt;&#x2F;p&gt;
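&lt;p&gt;A minimal sketch of that loop, with the scheduler injected so the logic can run outside a browser. Only the &lt;code&gt;tick&lt;&#x2F;code&gt; and &lt;code&gt;handle_key&lt;&#x2F;code&gt; method names come from the description above; the wrapper shape is an assumption:&lt;&#x2F;p&gt;

```typescript
// Drive the WASM game with an injected requestAnimationFrame-style
// scheduler so the loop can run (and be tested) outside a browser.
// Only the tick()/handle_key() method names come from the post; the
// wrapper shape is an assumption.
interface GameLike {
  tick(): void;
  handle_key(key: string): void;
}

type Scheduler = (cb: () => void) => void;

function startGameLoop(game: GameLike, schedule: Scheduler): () => void {
  let running = true;
  const frame = () => {
    if (!running) return;
    game.tick();       // input processing, animation, canvas rendering
    schedule(frame);   // queue the next frame
  };
  schedule(frame);
  return () => { running = false; };  // stop function
}
```

&lt;p&gt;In the browser you’d pass &lt;code&gt;requestAnimationFrame&lt;&#x2F;code&gt; as the scheduler and forward &lt;code&gt;keydown&lt;&#x2F;code&gt; events to &lt;code&gt;game.handle_key&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;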
&lt;h2 id=&quot;the-galaxy&quot;&gt;The Galaxy&lt;&#x2F;h2&gt;
&lt;p&gt;Here’s where things got a little unhinged.&lt;&#x2F;p&gt;
&lt;p&gt;I wanted a way to visualize all the puzzles that had been played. A leaderboard felt boring. A list felt worse. Then I thought: what if every puzzle was a star, and puzzles with similar techniques formed constellations?&lt;&#x2F;p&gt;
&lt;p&gt;The Galaxy page uses D3’s &lt;code&gt;forceSimulation&lt;&#x2F;code&gt; to create a force-directed graph. Each node is a puzzle, colored by difficulty tier (green for Beginner through near-black for Extreme). Node size scales with play count. Edges connect puzzles that share solving techniques, so similar puzzles cluster together.&lt;&#x2F;p&gt;
&lt;p&gt;The fun part is the convex hulls. D3 computes &lt;code&gt;polygonHull&lt;&#x2F;code&gt; for each technique family and draws translucent overlays around them, so you can see the Fish cluster, the Wings cluster, the Chains cluster. It looks like a star map. Which is the point.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;typescript&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-typescript &quot;&gt;&lt;code class=&quot;language-typescript&quot; data-lang=&quot;typescript&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;simulation &lt;&#x2F;span&gt;&lt;span&gt;= &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d3
&lt;&#x2F;span&gt;&lt;span&gt;  .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;forceSimulation&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;GalaxyNode&amp;gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;nodes&lt;&#x2F;span&gt;&lt;span&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;  .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;force&lt;&#x2F;span&gt;&lt;span&gt;(&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;link&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;, &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d3&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;forceLink&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;GalaxyNode, GalaxyEdge&amp;gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;edges&lt;&#x2F;span&gt;&lt;span&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;    .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;id&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;=&amp;gt; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d&lt;&#x2F;span&gt;&lt;span&gt;.id).&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;distance&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;60&lt;&#x2F;span&gt;&lt;span&gt;).&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;strength&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;0.3&lt;&#x2F;span&gt;&lt;span&gt;))
&lt;&#x2F;span&gt;&lt;span&gt;  .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;force&lt;&#x2F;span&gt;&lt;span&gt;(&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;charge&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;, &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d3&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;forceManyBody&lt;&#x2F;span&gt;&lt;span&gt;().&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;strength&lt;&#x2F;span&gt;&lt;span&gt;(-&lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;80&lt;&#x2F;span&gt;&lt;span&gt;))
&lt;&#x2F;span&gt;&lt;span&gt;  .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;force&lt;&#x2F;span&gt;&lt;span&gt;(&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;center&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;, &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d3&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;forceCenter&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;width &lt;&#x2F;span&gt;&lt;span&gt;&#x2F; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;height &lt;&#x2F;span&gt;&lt;span&gt;&#x2F; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;))
&lt;&#x2F;span&gt;&lt;span&gt;  .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;force&lt;&#x2F;span&gt;&lt;span&gt;(&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;collide&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;, &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d3&lt;&#x2F;span&gt;&lt;span&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;forceCollide&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;GalaxyNode&amp;gt;()
&lt;&#x2F;span&gt;&lt;span&gt;    .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;radius&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;=&amp;gt; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;nodeRadius&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;d&lt;&#x2F;span&gt;&lt;span&gt;) + &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;))
&lt;&#x2F;span&gt;&lt;span&gt;  .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;alphaDecay&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;0.02&lt;&#x2F;span&gt;&lt;span&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;  .&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;on&lt;&#x2F;span&gt;&lt;span&gt;(&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;tick&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;, &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ticked&lt;&#x2F;span&gt;&lt;span&gt;);
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
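&lt;p&gt;The hull pass can be sketched as a grouping step feeding &lt;code&gt;d3.polygonHull&lt;&#x2F;code&gt;; the &lt;code&gt;family&lt;&#x2F;code&gt; field and the helper below are assumptions for illustration:&lt;&#x2F;p&gt;

```typescript
// Group simulated node positions by technique family so each family
// can get a translucent convex-hull overlay. The `family` property is
// an assumption for this sketch.
interface HullNode { x: number; y: number; family: string; }

function familyPoints(nodes: HullNode[]): { [family: string]: [number, number][] } {
  const groups: { [family: string]: [number, number][] } = {};
  for (const n of nodes) {
    const bucket = groups[n.family] ?? [];
    bucket.push([n.x, n.y]);
    groups[n.family] = bucket;
  }
  return groups;
}

// Drawing side (browser, sketch only):
//   for (const [family, pts] of Object.entries(familyPoints(nodes))) {
//     const hull = d3.polygonHull(pts);  // null when fewer than 3 points
//     if (hull) drawTranslucentPath(hull, colorFor(family));
//   }
```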
&lt;p&gt;The Galaxy also has a live component. When someone completes a puzzle, a WebSocket message pushes the new node into the simulation in real time. You can literally watch the galaxy grow. The WebSocket connects to &lt;code&gt;&#x2F;api&#x2F;v1&#x2F;ws&#x2F;galaxy&lt;&#x2F;code&gt; through an Nginx proxy that keeps the connection alive for up to 24 hours:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;nginx&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-nginx &quot;&gt;&lt;code class=&quot;language-nginx&quot; data-lang=&quot;nginx&quot;&gt;&lt;span&gt;location &#x2F;api&#x2F;v1&#x2F;ws&#x2F; {
&lt;&#x2F;span&gt;&lt;span&gt;    proxy_pass $api_upstream;
&lt;&#x2F;span&gt;&lt;span&gt;    proxy_http_version 1.1;
&lt;&#x2F;span&gt;&lt;span&gt;    proxy_set_header Upgrade $http_upgrade;
&lt;&#x2F;span&gt;&lt;span&gt;    proxy_set_header Connection &amp;quot;upgrade&amp;quot;;
&lt;&#x2F;span&gt;&lt;span&gt;    proxy_read_timeout 86400s;
&lt;&#x2F;span&gt;&lt;span&gt;}
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
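&lt;p&gt;On the client side, the live update reduces to a small message handler. The payload shape below is a guess at the protocol, not the actual wire format:&lt;&#x2F;p&gt;

```typescript
// Apply a live galaxy message to the local node list and restart the
// simulation. The payload shape ({ type, node }) is assumed for
// illustration; only the endpoint path comes from the post.
interface LiveNode { id: string; tier: string; plays: number; }

function handleGalaxyMessage(
  raw: string,
  nodes: LiveNode[],
  restart: () => void
): boolean {
  const msg = JSON.parse(raw);
  if (msg.type !== 'puzzle_completed') return false;
  nodes.push(msg.node as LiveNode);
  restart();  // e.g. simulation.alpha(0.5).restart()
  return true;
}

// Browser wiring (sketch only):
//   const ws = new WebSocket('wss://ukodus.now/api/v1/ws/galaxy');
//   ws.onmessage = ev => handleGalaxyMessage(ev.data, nodes, restartSim);
```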
&lt;p&gt;There’s also a “secrets” system. By default, you only see 22 of the 45 techniques and 6 of the 10 families. The advanced families — Chains, ALS, Forcing, and Other — are hidden until you unlock them by completing harder puzzles. When you unlock secrets, the Galaxy reveals entire new constellations that were invisible before. It’s my favorite feature and probably the most unnecessary one.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;one-engine-many-targets&quot;&gt;One Engine, Many Targets&lt;&#x2F;h2&gt;
&lt;p&gt;The beauty of writing the core in Rust is that the same engine works everywhere:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Browser&lt;&#x2F;strong&gt;: Compiled to WASM, loaded in SvelteKit, renders to canvas&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;iOS&lt;&#x2F;strong&gt;: Compiled natively via Xcode, same Rust core, native UI shell&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Terminal&lt;&#x2F;strong&gt;: Rust binary with a TUI interface&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Shared codes&lt;&#x2F;strong&gt;: 8-character short codes and 81-character puzzle strings work across all platforms&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;You can start a puzzle on iOS, share the code, and your friend can play the exact same puzzle in a browser. The engine deterministically generates the same puzzle from the same seed, so there’s no server round-trip needed to decode a shared puzzle.&lt;&#x2F;p&gt;
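&lt;p&gt;The post doesn’t document the short-code format, but a seed-based scheme can be sketched as a base-32 round trip over a 40-bit seed (the alphabet and layout here are entirely illustrative, not the engine’s encoding):&lt;&#x2F;p&gt;

```typescript
// Round-trip a numeric puzzle seed through an 8-character short code.
// Purely illustrative: the real Ukodus encoding is not documented in
// the post. 32^8 = 2^40, so an 8-char base-32 code covers a 40-bit seed.
const ALPHABET = 'ABCDEFGHJKMNPQRSTVWXYZ0123456789'; // 32 symbols, no I/L/O/U

function encodeShortCode(seed: number): string {
  let n = seed;
  let code = '';
  for (let i = 0; i !== 8; i += 1) {
    code = ALPHABET[n % 32] + code;   // least-significant digit first
    n = Math.floor(n / 32);
  }
  return code;
}

function decodeShortCode(code: string): number {
  let n = 0;
  for (const ch of code) {
    n = n * 32 + ALPHABET.indexOf(ch); // assumes a valid code
  }
  return n;
}
```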
&lt;h2 id=&quot;sveltekit-5-runes-and-static-generation&quot;&gt;SvelteKit 5: Runes and Static Generation&lt;&#x2F;h2&gt;
&lt;p&gt;The web frontend is SvelteKit 5 (Svelte 5.49) with TypeScript. The entire state management layer uses Svelte 5 runes — class-based stores with &lt;code&gt;$state&lt;&#x2F;code&gt;, &lt;code&gt;$derived&lt;&#x2F;code&gt;, and &lt;code&gt;$effect&lt;&#x2F;code&gt; in &lt;code&gt;.svelte.ts&lt;&#x2F;code&gt; files:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;typescript&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-typescript &quot;&gt;&lt;code class=&quot;language-typescript&quot; data-lang=&quot;typescript&quot;&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;class &lt;&#x2F;span&gt;&lt;span style=&quot;color:#ebcb8b;&quot;&gt;PlayerStore &lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;{
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;id &lt;&#x2F;span&gt;&lt;span&gt;= &lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;$state&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;);
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;tag &lt;&#x2F;span&gt;&lt;span&gt;= &lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;$state&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;);
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;secrets &lt;&#x2F;span&gt;&lt;span&gt;= &lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;$state&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;false&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;);
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;setTag&lt;&#x2F;span&gt;&lt;span&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;value&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;string&lt;&#x2F;span&gt;&lt;span&gt;) &lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;{
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;this&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;tag &lt;&#x2F;span&gt;&lt;span&gt;= &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;value&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;localStorage&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;.&lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;setItem&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;PLAYER_TAG_KEY&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;, &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;value&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;);
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;  }
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#eff1f5;&quot;&gt;}
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;export const &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;playerStore &lt;&#x2F;span&gt;&lt;span&gt;= new PlayerStore();
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The app uses &lt;code&gt;adapter-static&lt;&#x2F;code&gt; with &lt;code&gt;prerender = true&lt;&#x2F;code&gt; and &lt;code&gt;trailingSlash = &#x27;always&#x27;&lt;&#x2F;code&gt;. Content pages (home, about, techniques, difficulty, privacy, how-to-play, app) get fully pre-rendered to static HTML at build time. Interactive pages (&lt;code&gt;&#x2F;play&#x2F;&lt;&#x2F;code&gt; and &lt;code&gt;&#x2F;galaxy&#x2F;&lt;&#x2F;code&gt;) set &lt;code&gt;ssr = false&lt;&#x2F;code&gt; because they need browser APIs — canvas for the game, D3 DOM manipulation for the galaxy.&lt;&#x2F;p&gt;
&lt;p&gt;This hybrid approach means content pages load instantly as static HTML while the interactive pages get a lightweight shell that hydrates client-side. The build output is a directory of plain HTML, CSS, and JS files that any web server can serve. No Node.js runtime needed in production.&lt;&#x2F;p&gt;
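&lt;p&gt;In SvelteKit terms, that split is just per-route option exports, roughly:&lt;&#x2F;p&gt;

```typescript
// src/routes/+layout.ts — prerender everything by default
export const prerender = true;
export const trailingSlash = 'always';

// src/routes/play/+layout.ts — the game needs a browser (canvas + WASM),
// so this subtree opts out of server-side rendering
export const ssr = false;
```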
&lt;p&gt;The previous frontend was 9 static HTML files with ~2,100 lines of inline JavaScript. The SvelteKit rewrite gave us proper component architecture, shared layouts (no more duplicated header and footer across 9 files), reactive state management, and PostHog analytics integration — all while maintaining the same zero-runtime deployment model.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;deployment-kubernetes-on-a-homelab&quot;&gt;Deployment: Kubernetes on a Homelab&lt;&#x2F;h2&gt;
&lt;p&gt;Yes, I’m running a Sudoku game on Kubernetes. &lt;a href=&quot;&#x2F;posts&#x2F;when-not-to-use-k8s&#x2F;&quot;&gt;I know&lt;&#x2F;a&gt;. The cluster runs on &lt;a href=&quot;&#x2F;posts&#x2F;talos-rpi5-custom-kernel-build&#x2F;&quot;&gt;Talos Linux on Raspberry Pi 5s&lt;&#x2F;a&gt; connected via &lt;a href=&quot;&#x2F;posts&#x2F;turingpi-2gbps-lacp-bonding&#x2F;&quot;&gt;2Gbps LACP-bonded networking&lt;&#x2F;a&gt; on a Turing Pi board. The dev environment is &lt;a href=&quot;&#x2F;posts&#x2F;nix&#x2F;&quot;&gt;NixOS&lt;&#x2F;a&gt; because of course it is.&lt;&#x2F;p&gt;
&lt;p&gt;The frontend build is a multi-stage Docker image:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;dockerfile&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-dockerfile &quot;&gt;&lt;code class=&quot;language-dockerfile&quot; data-lang=&quot;dockerfile&quot;&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;FROM&lt;&#x2F;span&gt;&lt;span&gt; node:22-alpine &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;AS &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;build
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;WORKDIR &lt;&#x2F;span&gt;&lt;span&gt;&#x2F;app
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;COPY&lt;&#x2F;span&gt;&lt;span&gt; frontend&#x2F;package.json frontend&#x2F;package-lock.json* .&#x2F;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;RUN &lt;&#x2F;span&gt;&lt;span&gt;npm ci
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;COPY&lt;&#x2F;span&gt;&lt;span&gt; frontend&#x2F; .
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;RUN &lt;&#x2F;span&gt;&lt;span&gt;npm run build
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;FROM&lt;&#x2F;span&gt;&lt;span&gt; nginx:1-alpine
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;COPY&lt;&#x2F;span&gt;&lt;span&gt; --from=&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;build&lt;&#x2F;span&gt;&lt;span&gt; &#x2F;app&#x2F;build &#x2F;usr&#x2F;share&#x2F;nginx&#x2F;html
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Stage one builds the SvelteKit static site (including WASM assets). Stage two drops the output into Nginx Alpine. The final image is tiny — just Nginx serving static files.&lt;&#x2F;p&gt;
&lt;p&gt;The Nginx config handles the caching strategy:&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Asset Type&lt;&#x2F;th&gt;&lt;th&gt;Browser Cache&lt;&#x2F;th&gt;&lt;th&gt;Edge Cache&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;&#x2F;_app&#x2F;&lt;&#x2F;code&gt; (hashed SvelteKit assets)&lt;&#x2F;td&gt;&lt;td&gt;1 year, immutable&lt;&#x2F;td&gt;&lt;td&gt;1 year&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;&#x2F;assets&#x2F;&lt;&#x2F;code&gt; (images, icons)&lt;&#x2F;td&gt;&lt;td&gt;30 days, immutable&lt;&#x2F;td&gt;&lt;td&gt;30 days&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;&#x2F;wasm&#x2F;&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;1 hour, must-revalidate&lt;&#x2F;td&gt;&lt;td&gt;24 hours&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Pages&lt;&#x2F;td&gt;&lt;td&gt;60 seconds&lt;&#x2F;td&gt;&lt;td&gt;5 minutes, stale-while-revalidate&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;code&gt;&#x2F;api&#x2F;&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;no-store&lt;&#x2F;td&gt;&lt;td&gt;no-store&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;p&gt;Hashed assets get immutable caching because the hash changes on every build. WASM gets shorter caching because I might update the engine without changing the filename. API responses are never cached. Pages get short browser caches with longer edge caches and &lt;code&gt;stale-while-revalidate&lt;&#x2F;code&gt; so users see content immediately while the cache refreshes in the background.&lt;&#x2F;p&gt;
&lt;p&gt;The API backend runs as a separate K8s service (&lt;code&gt;api.ukodus.svc.cluster.local:3000&lt;&#x2F;code&gt;) that Nginx proxies to via Kubernetes DNS. Security headers (HSTS, X-Content-Type-Options, X-Frame-Options, Referrer-Policy) are set globally, and the WASM location block adds &lt;code&gt;Cross-Origin-Embedder-Policy&lt;&#x2F;code&gt; and &lt;code&gt;Cross-Origin-Opener-Policy&lt;&#x2F;code&gt; for &lt;code&gt;SharedArrayBuffer&lt;&#x2F;code&gt; compatibility.&lt;&#x2F;p&gt;
&lt;p&gt;SvelteKit’s &lt;code&gt;adapter-static&lt;&#x2F;code&gt; generates pre-compressed &lt;code&gt;.br&lt;&#x2F;code&gt; and &lt;code&gt;.gz&lt;&#x2F;code&gt; files at build time. Nginx serves the &lt;code&gt;.gz&lt;&#x2F;code&gt; files directly with &lt;code&gt;gzip_static on&lt;&#x2F;code&gt; (the &lt;code&gt;.br&lt;&#x2F;code&gt; files need the ngx_brotli module’s &lt;code&gt;brotli_static on&lt;&#x2F;code&gt;), so there’s zero CPU overhead for compression at request time.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-s-next&quot;&gt;What’s Next&lt;&#x2F;h2&gt;
&lt;p&gt;The engine still has room for more techniques. There are some fish variants and chain patterns I haven’t implemented yet. The iOS app needs feature parity with the web version’s galaxy view. And I keep thinking about adding a puzzle-of-the-day feature with global leaderboards.&lt;&#x2F;p&gt;
&lt;p&gt;But for now, it’s a Sudoku app that doesn’t have ads, doesn’t require a subscription, and teaches you actual solving techniques instead of just filling in answers. Which is all I wanted in the first place.&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;p&gt;&lt;strong&gt;Play it&lt;&#x2F;strong&gt;: &lt;a href=&quot;https:&#x2F;&#x2F;ukodus.now&quot;&gt;ukodus.now&lt;&#x2F;a&gt;
&lt;strong&gt;Source&lt;&#x2F;strong&gt;: &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;kcirtapfromspace&#x2F;sudoku-core&quot;&gt;github.com&#x2F;kcirtapfromspace&#x2F;sudoku-core&lt;&#x2F;a&gt;
&lt;strong&gt;iOS App&lt;&#x2F;strong&gt;: &lt;a href=&quot;https:&#x2F;&#x2F;apps.apple.com&#x2F;us&#x2F;app&#x2F;sudoku&#x2F;id6758485043&quot;&gt;App Store&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>Building Talos Linux for Raspberry Pi 5 with Custom Kernel</title>
          <pubDate>Sun, 01 Feb 2026 00:00:00 -0700</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/talos-rpi5-custom-kernel-build/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/talos-rpi5-custom-kernel-build/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/talos-rpi5-custom-kernel-build/">&lt;h1 id=&quot;building-talos-linux-for-raspberry-pi-5-with-custom-kernel&quot;&gt;Building Talos Linux for Raspberry Pi 5 with Custom Kernel&lt;&#x2F;h1&gt;
&lt;p&gt;Getting Talos Linux running on the Raspberry Pi 5 requires a custom kernel build. The stock Talos kernel doesn’t include the necessary drivers for RPi5’s RP1 PCIe controller and MACB ethernet. This post documents the complete build process and the issues I encountered along the way.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-problem&quot;&gt;The Problem&lt;&#x2F;h2&gt;
&lt;p&gt;The RPi5 uses a new architecture with the RP1 southbridge chip connected via PCIe. This means:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Different ethernet driver (MACB + Broadcom BCM54213PE PHY)&lt;&#x2F;li&gt;
&lt;li&gt;Different device tree files (bcm2712 instead of bcm2711)&lt;&#x2F;li&gt;
&lt;li&gt;Network interface named &lt;code&gt;end0&lt;&#x2F;code&gt; instead of &lt;code&gt;eth0&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Potential 4K vs 16K page size conflicts&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;Docker with buildx support&lt;&#x2F;li&gt;
&lt;li&gt;Local container registry (I used &lt;code&gt;localhost:5001&lt;&#x2F;code&gt;)&lt;&#x2F;li&gt;
&lt;li&gt;~50GB disk space for builds&lt;&#x2F;li&gt;
&lt;li&gt;ARM64 cross-compilation support&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;repository-structure&quot;&gt;Repository Structure&lt;&#x2F;h2&gt;
&lt;p&gt;I created a monorepo with Talos and related projects as submodules:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;talos-rpi5&#x2F;
&lt;&#x2F;span&gt;&lt;span&gt;├── checkouts&#x2F;
&lt;&#x2F;span&gt;&lt;span&gt;│   ├── pkgs&#x2F;              # kernel builds
&lt;&#x2F;span&gt;&lt;span&gt;│   ├── talos&#x2F;             # main OS
&lt;&#x2F;span&gt;&lt;span&gt;│   └── sbc-raspberrypi5&#x2F;  # RPi5 overlay
&lt;&#x2F;span&gt;&lt;span&gt;├── _out&#x2F;                  # Build outputs
&lt;&#x2F;span&gt;&lt;span&gt;├── CLAUDE.md              # Build documentation
&lt;&#x2F;span&gt;&lt;span&gt;└── AGENTS.md              # Task delegation docs
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;step-1-build-the-custom-kernel&quot;&gt;Step 1: Build the Custom Kernel&lt;&#x2F;h2&gt;
&lt;p&gt;The kernel needs specific configuration for RPi5:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Clear Docker cache to ensure fresh build
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span&gt; builder prune&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -af
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Build kernel for arm64
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;cd&lt;&#x2F;span&gt;&lt;span&gt; checkouts&#x2F;pkgs
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;gmake&lt;&#x2F;span&gt;&lt;span&gt; kernel PLATFORM=linux&#x2F;arm64 PUSH=true \
&lt;&#x2F;span&gt;&lt;span&gt;  REGISTRY=localhost:5001 USERNAME=wittenbude
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
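&lt;p&gt;The build pushes into the local registry on port 5001. A quick sanity check (not part of the official workflow) is the registry’s standard v2 HTTP API, which should list the freshly pushed tag:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# List kernel tags present in the local registry
&lt;&#x2F;span&gt;&lt;span&gt;curl -s localhost:5001&#x2F;v2&#x2F;wittenbude&#x2F;kernel&#x2F;tags&#x2F;list
&lt;&#x2F;span&gt;&lt;span&gt;# Returns a JSON document with a &amp;quot;tags&amp;quot; array
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;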
&lt;h3 id=&quot;critical-kernel-config-settings&quot;&gt;Critical Kernel Config Settings&lt;&#x2F;h3&gt;
&lt;p&gt;In &lt;code&gt;checkouts&#x2F;pkgs&#x2F;kernel&#x2F;build&#x2F;config-arm64&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Use 4K pages (16K causes Talos mount API issues)
&lt;&#x2F;span&gt;&lt;span&gt;CONFIG_ARM64_4K_PAGES=y
&lt;&#x2F;span&gt;&lt;span&gt;# CONFIG_ARM64_16K_PAGES is not set
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;# SD card drivers must be built-in (not modules)
&lt;&#x2F;span&gt;&lt;span&gt;CONFIG_MMC_SDHCI_PLTFM=y
&lt;&#x2F;span&gt;&lt;span&gt;CONFIG_MMC_SDHCI_BRCMSTB=y
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;# Ethernet support
&lt;&#x2F;span&gt;&lt;span&gt;CONFIG_MACB=y
&lt;&#x2F;span&gt;&lt;span&gt;CONFIG_PHYLIB=y
&lt;&#x2F;span&gt;&lt;span&gt;CONFIG_PHYLINK=y
&lt;&#x2F;span&gt;&lt;span&gt;CONFIG_BROADCOM_PHY=y
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The 4K vs 16K page dilemma:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;16K pages&lt;&#x2F;strong&gt;: Networking works, but Talos shadow bind mounts fail with EINVAL&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;4K pages&lt;&#x2F;strong&gt;: Talos boots cleanly, networking requires matching kernel and DTB versions&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
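&lt;p&gt;To sanity-check which page size a node actually booted with, one option (a sketch; the node name is a placeholder) is a throwaway debug pod that reads &lt;code&gt;&#x2F;proc&#x2F;self&#x2F;smaps&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Spawn a debug pod on the node and read the kernel page size
&lt;&#x2F;span&gt;&lt;span&gt;kubectl debug node&#x2F;talos-192-168-150-8 -it --image=busybox -- \
&lt;&#x2F;span&gt;&lt;span&gt;  awk &amp;#39;&#x2F;KernelPageSize&#x2F;{print $2, $3; exit}&amp;#39; &#x2F;proc&#x2F;self&#x2F;smaps
&lt;&#x2F;span&gt;&lt;span&gt;# Prints &amp;quot;4 kB&amp;quot; when CONFIG_ARM64_4K_PAGES=y took effect
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;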
&lt;h2 id=&quot;step-2-build-the-imager&quot;&gt;Step 2: Build the Imager&lt;&#x2F;h2&gt;
&lt;p&gt;The imager creates the final bootable image. It needs to reference our custom kernel:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;cd&lt;&#x2F;span&gt;&lt;span&gt; checkouts&#x2F;talos
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Get the kernel tag
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;KERNEL_TAG&lt;&#x2F;span&gt;&lt;span&gt;=$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;cd&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt; ..&#x2F;pkgs &lt;&#x2F;span&gt;&lt;span&gt;&amp;amp;&amp;amp; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt; describe&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --tag --always --dirty&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Build imager with custom kernel
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;gmake&lt;&#x2F;span&gt;&lt;span&gt; imager PLATFORM=linux&#x2F;arm64 PUSH=true \
&lt;&#x2F;span&gt;&lt;span&gt;  REGISTRY=localhost:5001 USERNAME=wittenbude \
&lt;&#x2F;span&gt;&lt;span&gt;  PKG_KERNEL=localhost:5001&#x2F;wittenbude&#x2F;kernel:$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;KERNEL_TAG
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;step-3-build-the-rpi5-overlay&quot;&gt;Step 3: Build the RPi5 Overlay&lt;&#x2F;h2&gt;
&lt;p&gt;The overlay provides RPi5-specific device tree files and firmware:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;cd&lt;&#x2F;span&gt;&lt;span&gt; checkouts&#x2F;sbc-raspberrypi5
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Get kernel tag for PKGS reference
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;KERNEL_TAG&lt;&#x2F;span&gt;&lt;span&gt;=$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;cd&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt; ..&#x2F;pkgs &lt;&#x2F;span&gt;&lt;span&gt;&amp;amp;&amp;amp; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt; describe&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --tag --always --dirty&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Build overlay with our custom kernel&amp;#39;s DTBs
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;gmake&lt;&#x2F;span&gt;&lt;span&gt; PLATFORM=linux&#x2F;arm64 PUSH=true \
&lt;&#x2F;span&gt;&lt;span&gt;  REGISTRY=localhost:5001 USERNAME=wittenbude \
&lt;&#x2F;span&gt;&lt;span&gt;  PKGS_PREFIX=localhost:5001&#x2F;wittenbude PKGS=$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;KERNEL_TAG
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This ensures the DTB files match the kernel version, which is critical for RP1 initialization.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;step-4-generate-the-metal-image&quot;&gt;Step 4: Generate the Metal Image&lt;&#x2F;h2&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;cd&lt;&#x2F;span&gt;&lt;span&gt; checkouts&#x2F;talos
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;IMAGER_TAG&lt;&#x2F;span&gt;&lt;span&gt;=$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt; describe&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --tag --always --dirty&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;)
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;OVERLAY_TAG&lt;&#x2F;span&gt;&lt;span&gt;=$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;cd&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt; ..&#x2F;sbc-raspberrypi5 &lt;&#x2F;span&gt;&lt;span&gt;&amp;amp;&amp;amp; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt; describe&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --tag --always --dirty&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span&gt; run&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --rm -t --network&lt;&#x2F;span&gt;&lt;span&gt;=host \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -v&lt;&#x2F;span&gt;&lt;span&gt; .&#x2F;_out:&#x2F;out&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -v&lt;&#x2F;span&gt;&lt;span&gt; &#x2F;dev:&#x2F;dev&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --privileged &lt;&#x2F;span&gt;&lt;span&gt;\
&lt;&#x2F;span&gt;&lt;span&gt;  localhost:5001&#x2F;wittenbude&#x2F;imager:$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;IMAGER_TAG &lt;&#x2F;span&gt;&lt;span&gt;\
&lt;&#x2F;span&gt;&lt;span&gt;  metal&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --arch&lt;&#x2F;span&gt;&lt;span&gt; arm64 \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  --board&lt;&#x2F;span&gt;&lt;span&gt;=rpi_generic \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  --overlay-name&lt;&#x2F;span&gt;&lt;span&gt;=rpi5 \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  --overlay-image&lt;&#x2F;span&gt;&lt;span&gt;=localhost:5001&#x2F;wittenbude&#x2F;sbc-raspberrypi5:$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;OVERLAY_TAG &lt;&#x2F;span&gt;&lt;span&gt;\
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  --extra-kernel-arg&lt;&#x2F;span&gt;&lt;span&gt;=&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;console=tty1&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  --extra-kernel-arg&lt;&#x2F;span&gt;&lt;span&gt;=&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;console=ttyAMA10,115200&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  --system-extension-image&lt;&#x2F;span&gt;&lt;span&gt;=ghcr.io&#x2F;siderolabs&#x2F;iscsi-tools:v0.1.10
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Output: &lt;code&gt;_out&#x2F;metal-arm64.raw.zst&lt;&#x2F;code&gt; (~88MB)&lt;&#x2F;p&gt;
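&lt;p&gt;Before flashing, it’s worth confirming the archive decompresses cleanly and checking its expanded size:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Verify archive integrity, then show compression details
&lt;&#x2F;span&gt;&lt;span&gt;zstd -t _out&#x2F;metal-arm64.raw.zst
&lt;&#x2F;span&gt;&lt;span&gt;zstd -l _out&#x2F;metal-arm64.raw.zst
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;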
&lt;h2 id=&quot;step-5-flash-and-boot&quot;&gt;Step 5: Flash and Boot&lt;&#x2F;h2&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Identify SD card
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;diskutil&lt;&#x2F;span&gt;&lt;span&gt; list
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Unmount
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;diskutil&lt;&#x2F;span&gt;&lt;span&gt; unmountDisk &#x2F;dev&#x2F;disk4
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Flash (use rdisk for faster writes)
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;zstd -d -c&lt;&#x2F;span&gt;&lt;span&gt; _out&#x2F;metal-arm64.raw.zst | &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;sudo&lt;&#x2F;span&gt;&lt;span&gt; dd of=&#x2F;dev&#x2F;rdisk4 bs=4m status=progress
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Eject
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;diskutil&lt;&#x2F;span&gt;&lt;span&gt; eject &#x2F;dev&#x2F;disk4
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
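&lt;p&gt;Once the Pi boots, it should answer on the Talos API even before any configuration is applied (the IP is a placeholder):&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Maintenance mode accepts unauthenticated API calls
&lt;&#x2F;span&gt;&lt;span&gt;talosctl -n &amp;lt;NODE_IP&amp;gt; version --insecure
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;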
&lt;h2 id=&quot;step-6-apply-worker-configuration&quot;&gt;Step 6: Apply Worker Configuration&lt;&#x2F;h2&gt;
&lt;p&gt;Once booted, the node enters maintenance mode. Apply the worker config:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# In maintenance mode (first boot)
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;talosctl&lt;&#x2F;span&gt;&lt;span&gt; apply-config&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --insecure --nodes &lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;NODE_IP&amp;gt; --file worker.yaml
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# After leaving maintenance mode
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;talosctl -n &lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;NODE_IP&amp;gt; apply-config&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --file&lt;&#x2F;span&gt;&lt;span&gt; worker.yaml
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;critical-enable-discovery&quot;&gt;Critical: Enable Discovery&lt;&#x2F;h3&gt;
&lt;p&gt;The worker config &lt;strong&gt;must&lt;&#x2F;strong&gt; include cluster discovery settings, or the node won’t appear in cluster membership:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;yaml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;cluster&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;id&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&amp;lt;CLUSTER_ID&amp;gt;
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;secret&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&amp;lt;CLUSTER_SECRET&amp;gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Required for discovery auth
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;controlPlane&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;endpoint&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;&amp;lt;CONTROL_PLANE_IP&amp;gt;:6443
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;clusterName&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&amp;lt;CLUSTER_NAME&amp;gt;
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;discovery&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;enabled&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# THIS IS CRITICAL
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;registries&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;service&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;endpoint&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;discovery.talos.dev&#x2F;
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;kubernetes&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;disabled&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Without &lt;code&gt;discovery.enabled: true&lt;&#x2F;code&gt; and the &lt;code&gt;cluster.secret&lt;&#x2F;code&gt;, the node boots and kubelet runs, but &lt;code&gt;talosctl get members&lt;&#x2F;code&gt; won’t show it.&lt;&#x2F;p&gt;
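&lt;p&gt;With discovery configured correctly, membership can be confirmed from the control plane:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# The RPi5 should appear alongside the existing members
&lt;&#x2F;span&gt;&lt;span&gt;talosctl -n &amp;lt;CP_IP&amp;gt; get members
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;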
&lt;h2 id=&quot;network-interface&quot;&gt;Network Interface&lt;&#x2F;h2&gt;
&lt;p&gt;RPi5 uses &lt;code&gt;end0&lt;&#x2F;code&gt; for ethernet (not &lt;code&gt;eth0&lt;&#x2F;code&gt;):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;yaml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;machine&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;network&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;interfaces&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;      - &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;interface&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;end0
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;dhcp&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
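&lt;p&gt;The interface name can be confirmed before committing the config (add &lt;code&gt;--insecure&lt;&#x2F;code&gt; while the node is still in maintenance mode):&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# List network links as the node sees them; look for end0
&lt;&#x2F;span&gt;&lt;span&gt;talosctl -n &amp;lt;NODE_IP&amp;gt; get links
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;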
&lt;h2 id=&quot;uart-debugging&quot;&gt;UART Debugging&lt;&#x2F;h2&gt;
&lt;p&gt;For boot issues, connect a UART adapter to the GPIO pins:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Start logging session
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;screen -L -Logfile&lt;&#x2F;span&gt;&lt;span&gt; &#x2F;tmp&#x2F;uart.log &#x2F;dev&#x2F;cu.usbserial-* 115200
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Key boot messages to watch for:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;BL31&lt;&#x2F;code&gt; - ARM Trusted Firmware starting&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;mmc0: new&lt;&#x2F;code&gt; - SD card detected&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;macb&lt;&#x2F;code&gt; - Ethernet driver loading&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;BCM54213PE&lt;&#x2F;code&gt; - PHY detected&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;machined&lt;&#x2F;code&gt; - Talos starting&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
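&lt;p&gt;Rather than watching the console scroll by, grep the captured log for those milestones:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Pull the interesting boot markers out of the UART capture
&lt;&#x2F;span&gt;&lt;span&gt;grep -E &amp;#39;BL31|mmc0: new|macb|BCM54213PE|machined&amp;#39; &#x2F;tmp&#x2F;uart.log
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;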
&lt;h2 id=&quot;troubleshooting&quot;&gt;Troubleshooting&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;no-network-activity-nic-lights-off&quot;&gt;No Network Activity (NIC lights off)&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;Check kernel&#x2F;DTB version mismatch&lt;&#x2F;li&gt;
&lt;li&gt;Verify RP1 PCIe initialization in dmesg&lt;&#x2F;li&gt;
&lt;li&gt;Consider 16K page kernel if RP1 isn’t initializing&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;node-not-appearing-in-cluster&quot;&gt;Node Not Appearing in Cluster&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;Check &lt;code&gt;talosctl get discoveryconfig&lt;&#x2F;code&gt; - must show &lt;code&gt;discoveryEnabled: true&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Verify &lt;code&gt;cluster.secret&lt;&#x2F;code&gt; matches control plane&lt;&#x2F;li&gt;
&lt;li&gt;Check &lt;code&gt;talosctl get members&lt;&#x2F;code&gt; from the node itself&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;kubelet-certificate-errors&quot;&gt;Kubelet Certificate Errors&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;Verify &lt;code&gt;cluster.ca.crt&lt;&#x2F;code&gt; matches the control plane’s certificate&lt;&#x2F;li&gt;
&lt;li&gt;Extract correct CA: &lt;code&gt;talosctl -n &amp;lt;CP_IP&amp;gt; get machineconfig -o yaml | grep -A1 &quot;cluster:&quot; | grep crt&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;boot-loop-at-bl31&quot;&gt;Boot Loop at BL31&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;Usually indicates 16K page kernel incompatibility&lt;&#x2F;li&gt;
&lt;li&gt;Rebuild with &lt;code&gt;CONFIG_ARM64_4K_PAGES=y&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;final-result&quot;&gt;Final Result&lt;&#x2F;h2&gt;
&lt;p&gt;After all this, I have a working RPi5 node in my Talos cluster:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;$ kubectl get nodes -o wide
&lt;&#x2F;span&gt;&lt;span&gt;NAME                  STATUS   ROLES           VERSION   INTERNAL-IP      KERNEL-VERSION
&lt;&#x2F;span&gt;&lt;span&gt;talos-192-168-150-8   Ready    &amp;lt;none&amp;gt;          v1.35.0   192.168.150.8    6.12.67-talos
&lt;&#x2F;span&gt;&lt;span&gt;talos-ek0-5dx         Ready    control-plane   v1.35.0   100.78.183.103   6.18.2-talos
&lt;&#x2F;span&gt;&lt;span&gt;talos-lwn-dba         Ready    &amp;lt;none&amp;gt;          v1.35.0   100.95.115.98    6.18.2-talos
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The RPi5 is running the custom 6.12.67-talos kernel with 4K pages, working ethernet, and is fully participating in cluster workloads.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;repository&quot;&gt;Repository&lt;&#x2F;h2&gt;
&lt;p&gt;The build configuration and documentation are available at:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Main repo:&lt;&#x2F;strong&gt; &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;kcirtapfromspace&#x2F;talos-rpi5&quot;&gt;github.com&#x2F;kcirtapfromspace&#x2F;talos-rpi5&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;submodule-forks-rpi5-support-branch&quot;&gt;Submodule Forks (rpi5-support branch)&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;pkgs&lt;&#x2F;strong&gt; (kernel config): &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;kcirtapfromspace&#x2F;pkgs&#x2F;tree&#x2F;rpi5-support&quot;&gt;github.com&#x2F;kcirtapfromspace&#x2F;pkgs&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;talos&lt;&#x2F;strong&gt; (OS changes): &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;kcirtapfromspace&#x2F;talos&#x2F;tree&#x2F;rpi5-support&quot;&gt;github.com&#x2F;kcirtapfromspace&#x2F;talos&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;sbc-raspberrypi5&lt;&#x2F;strong&gt; (overlay): &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;kcirtapfromspace&#x2F;sbc-raspberrypi5&#x2F;tree&#x2F;rpi5-support&quot;&gt;github.com&#x2F;kcirtapfromspace&#x2F;sbc-raspberrypi5&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;references&quot;&gt;References&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;www.talos.dev&#x2F;v1.12&#x2F;talos-guides&#x2F;install&#x2F;single-board-computers&#x2F;rpi_generic&#x2F;&quot;&gt;Talos Linux Documentation&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;siderolabs&#x2F;sbc-raspberrypi5&quot;&gt;siderolabs&#x2F;sbc-raspberrypi5&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;siderolabs&#x2F;pkgs&quot;&gt;siderolabs&#x2F;pkgs&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;datasheets.raspberrypi.com&#x2F;rp1&#x2F;rp1-peripherals.pdf&quot;&gt;RPi5 RP1 Peripheral Datasheet&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Matrix Homeserver with MAS, Keycloak &amp; Tailscale</title>
          <pubDate>Sun, 25 Jan 2026 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/matrix-mas-keycloak-tailscale-setup/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/matrix-mas-keycloak-tailscale-setup/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/matrix-mas-keycloak-tailscale-setup/">&lt;h1 id=&quot;matrix-homeserver-with-mas-keycloak-tailscale&quot;&gt;Matrix Homeserver with MAS, Keycloak &amp;amp; Tailscale&lt;&#x2F;h1&gt;
&lt;p&gt;A deep-dive into setting up a self-hosted Matrix homeserver with modern authentication, SSO via Keycloak, and QR code login support—all exposed securely through Tailscale Funnel.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-goal&quot;&gt;The Goal&lt;&#x2F;h2&gt;
&lt;p&gt;Get a Matrix homeserver running on a Raspberry Pi that:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Is publicly accessible via Tailscale Funnel&lt;&#x2F;li&gt;
&lt;li&gt;Uses Matrix Authentication Service (MAS) for modern OIDC-based auth&lt;&#x2F;li&gt;
&lt;li&gt;Integrates with Keycloak for SSO&lt;&#x2F;li&gt;
&lt;li&gt;Supports QR code login from Element iOS&#x2F;Android&lt;&#x2F;li&gt;
&lt;li&gt;Federates with other Matrix servers&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;architecture-overview&quot;&gt;Architecture Overview&lt;&#x2F;h2&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;Internet
&lt;&#x2F;span&gt;&lt;span&gt;    │
&lt;&#x2F;span&gt;&lt;span&gt;    ▼
&lt;&#x2F;span&gt;&lt;span&gt;Tailscale Funnel (HTTPS :443)
&lt;&#x2F;span&gt;&lt;span&gt;    │
&lt;&#x2F;span&gt;&lt;span&gt;    ▼
&lt;&#x2F;span&gt;&lt;span&gt;┌─────────────────────────────────────────────────────────┐
&lt;&#x2F;span&gt;&lt;span&gt;│  Raspberry Pi (thinkmeshmatrix.tail16ecc2.ts.net)       │
&lt;&#x2F;span&gt;&lt;span&gt;│                                                         │
&lt;&#x2F;span&gt;&lt;span&gt;│  ┌─────────────┐     ┌─────────────┐                   │
&lt;&#x2F;span&gt;&lt;span&gt;│  │ nginx-proxy │────▶│   Synapse   │                   │
&lt;&#x2F;span&gt;&lt;span&gt;│  │   (:8090)   │     │   (:8008)   │                   │
&lt;&#x2F;span&gt;&lt;span&gt;│  └──────┬──────┘     └─────────────┘                   │
&lt;&#x2F;span&gt;&lt;span&gt;│         │                                               │
&lt;&#x2F;span&gt;&lt;span&gt;│         ├────────────▶┌─────────────┐                  │
&lt;&#x2F;span&gt;&lt;span&gt;│         │             │     MAS     │                   │
&lt;&#x2F;span&gt;&lt;span&gt;│         │             │   (:8080)   │                   │
&lt;&#x2F;span&gt;&lt;span&gt;│         │             └──────┬──────┘                   │
&lt;&#x2F;span&gt;&lt;span&gt;│         │                    │                          │
&lt;&#x2F;span&gt;&lt;span&gt;│         │             ┌──────▼──────┐                  │
&lt;&#x2F;span&gt;&lt;span&gt;│         │             │ mas-postgres │                  │
&lt;&#x2F;span&gt;&lt;span&gt;│         │             └─────────────┘                   │
&lt;&#x2F;span&gt;&lt;span&gt;│         │                                               │
&lt;&#x2F;span&gt;&lt;span&gt;│         └────────────▶┌─────────────┐                  │
&lt;&#x2F;span&gt;&lt;span&gt;│                       │  Keycloak   │                   │
&lt;&#x2F;span&gt;&lt;span&gt;│                       │   (:8080)   │                   │
&lt;&#x2F;span&gt;&lt;span&gt;│                       └─────────────┘                   │
&lt;&#x2F;span&gt;&lt;span&gt;└─────────────────────────────────────────────────────────┘
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;the-journey-and-the-hiccups&quot;&gt;The Journey (and the Hiccups)&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;1-initial-tailscale-setup&quot;&gt;1. Initial Tailscale Setup&lt;&#x2F;h3&gt;
&lt;p&gt;Started by installing Tailscale directly on the Raspberry Pi running Synapse and enabling Funnel for public HTTPS access:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;tailscale&lt;&#x2F;span&gt;&lt;span&gt; up&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --ssh
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;tailscale&lt;&#x2F;span&gt;&lt;span&gt; funnel 8090
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Changed the Pi’s hostname to &lt;code&gt;thinkmeshmatrix&lt;&#x2F;code&gt; to reflect its purpose.&lt;&#x2F;p&gt;
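&lt;p&gt;A quick check that Funnel is actually forwarding before moving on:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Should show the public hostname proxying to 127.0.0.1:8090
&lt;&#x2F;span&gt;&lt;span&gt;tailscale funnel status
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;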
&lt;h3 id=&quot;2-federation-woes&quot;&gt;2. Federation Woes&lt;&#x2F;h3&gt;
&lt;p&gt;Initial server was configured with &lt;code&gt;server_name: matrix.local&lt;&#x2F;code&gt;—which obviously can’t federate since remote servers can’t verify it.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #1&lt;&#x2F;strong&gt;: Empty &lt;code&gt;federation_domain_whitelist: []&lt;&#x2F;code&gt; was blocking ALL outbound federation. The fix was to comment out that line entirely.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #2&lt;&#x2F;strong&gt;: Matrix federation uses port 8448 by default, but Tailscale Funnel serves on 443. Fixed by adding &lt;code&gt;serve_server_wellknown: true&lt;&#x2F;code&gt; to Synapse config, which tells remote servers to use port 443.&lt;&#x2F;p&gt;
&lt;p&gt;Had to reset the server with a new &lt;code&gt;server_name&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;yaml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;server_name&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;thinkmeshmatrix.tail16ecc2.ts.net&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
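&lt;p&gt;With &lt;code&gt;serve_server_wellknown: true&lt;&#x2F;code&gt;, remote servers discover the port from a document you can curl yourself:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;curl https:&#x2F;&#x2F;thinkmeshmatrix.tail16ecc2.ts.net&#x2F;.well-known&#x2F;matrix&#x2F;server
&lt;&#x2F;span&gt;&lt;span&gt;# {&amp;quot;m.server&amp;quot;: &amp;quot;thinkmeshmatrix.tail16ecc2.ts.net:443&amp;quot;}
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;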
&lt;h3 id=&quot;3-setting-up-mas-matrix-authentication-service&quot;&gt;3. Setting Up MAS (Matrix Authentication Service)&lt;&#x2F;h3&gt;
&lt;p&gt;MAS provides modern OIDC-based authentication for Matrix, enabling features like:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Delegated authentication to external providers&lt;&#x2F;li&gt;
&lt;li&gt;Fine-grained session management&lt;&#x2F;li&gt;
&lt;li&gt;QR code login (MSC4108)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #3&lt;&#x2F;strong&gt;: MAS requires PostgreSQL, not SQLite. Deployed a &lt;code&gt;postgres:16-alpine&lt;&#x2F;code&gt; container:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span&gt; run&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -d --name&lt;&#x2F;span&gt;&lt;span&gt; mas-postgres \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  --network&lt;&#x2F;span&gt;&lt;span&gt; matrix-net \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -e&lt;&#x2F;span&gt;&lt;span&gt; POSTGRES_USER=mas \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -e&lt;&#x2F;span&gt;&lt;span&gt; POSTGRES_PASSWORD=maspassword \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -e&lt;&#x2F;span&gt;&lt;span&gt; POSTGRES_DB=mas \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -v&lt;&#x2F;span&gt;&lt;span&gt; mas-postgres-data:&#x2F;var&#x2F;lib&#x2F;postgresql&#x2F;data \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  --restart&lt;&#x2F;span&gt;&lt;span&gt; unless-stopped \
&lt;&#x2F;span&gt;&lt;span&gt;  postgres:16-alpine
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
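&lt;p&gt;A quick health check before pointing MAS at it:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Prints &amp;quot;accepting connections&amp;quot; once the database is ready
&lt;&#x2F;span&gt;&lt;span&gt;docker exec mas-postgres pg_isready -U mas -d mas
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;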
&lt;h3 id=&quot;4-nginx-reverse-proxy-routing&quot;&gt;4. Nginx Reverse Proxy Routing&lt;&#x2F;h3&gt;
&lt;p&gt;The trickiest part was getting nginx to route requests correctly between Synapse, MAS, and Keycloak. Key insight: certain Matrix endpoints need to go to MAS, not Synapse:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;nginx&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-nginx &quot;&gt;&lt;code class=&quot;language-nginx&quot; data-lang=&quot;nginx&quot;&gt;&lt;span&gt;# MAS endpoints
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;.well-known&#x2F;openid-configuration { proxy_pass http:&#x2F;&#x2F;mas; }
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;oauth2&#x2F; { proxy_pass http:&#x2F;&#x2F;mas; }
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;authorize { proxy_pass http:&#x2F;&#x2F;mas; }
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;_matrix&#x2F;client&#x2F;v3&#x2F;login { proxy_pass http:&#x2F;&#x2F;mas; }
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;complete-compat-sso&#x2F; { proxy_pass http:&#x2F;&#x2F;mas; }
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;# Keycloak endpoints  
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;realms&#x2F; { proxy_pass http:&#x2F;&#x2F;keycloak; }
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;# Everything else to Synapse
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F; { proxy_pass http:&#x2F;&#x2F;synapse; }
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #4&lt;&#x2F;strong&gt;: Kept getting “No Such Resource” errors during SSO flow. Turned out several MAS endpoints were missing from nginx config: &lt;code&gt;&#x2F;complete-compat-sso&#x2F;&lt;&#x2F;code&gt;, &lt;code&gt;&#x2F;link&lt;&#x2F;code&gt;, &lt;code&gt;&#x2F;consent&lt;&#x2F;code&gt;, &lt;code&gt;&#x2F;device&lt;&#x2F;code&gt;.&lt;&#x2F;p&gt;
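&lt;p&gt;The remaining routes follow the same pattern (a sketch; exact paths can vary between MAS versions):&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Additional MAS routes hit during the SSO flow
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;link    { proxy_pass http:&#x2F;&#x2F;mas; }
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;consent { proxy_pass http:&#x2F;&#x2F;mas; }
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;device  { proxy_pass http:&#x2F;&#x2F;mas; }
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;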
&lt;h3 id=&quot;5-keycloak-integration&quot;&gt;5. Keycloak Integration&lt;&#x2F;h3&gt;
&lt;p&gt;Set up Keycloak as the upstream identity provider for MAS. This allows using Keycloak’s user management and potentially federating with other identity providers later.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #5&lt;&#x2F;strong&gt;: MAS was redirecting users to &lt;code&gt;http:&#x2F;&#x2F;keycloak:8080&#x2F;...&lt;&#x2F;code&gt; (internal Docker hostname) instead of the public URL. Fixed by:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Adding Keycloak routes to nginx&lt;&#x2F;li&gt;
&lt;li&gt;Configuring Keycloak with &lt;code&gt;KC_HOSTNAME&lt;&#x2F;code&gt;:&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span&gt; run&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -d --name&lt;&#x2F;span&gt;&lt;span&gt; keycloak \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -e&lt;&#x2F;span&gt;&lt;span&gt; KC_HOSTNAME=thinkmeshmatrix.tail16ecc2.ts.net \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -e&lt;&#x2F;span&gt;&lt;span&gt; KC_HTTP_ENABLED=true \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -e&lt;&#x2F;span&gt;&lt;span&gt; KC_PROXY_HEADERS=xforwarded \
&lt;&#x2F;span&gt;&lt;span&gt;  ...
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;Updating MAS config to use public URL for issuer&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #6&lt;&#x2F;strong&gt;: MAS cached the old Keycloak discovery document. Had to change the provider ID to force a refresh.&lt;&#x2F;p&gt;
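&lt;p&gt;Concretely, the cache-bust amounted to renaming the provider in the MAS config (the &lt;code&gt;id&lt;&#x2F;code&gt; values here are illustrative; note that a fresh ID also detaches any existing upstream links, so this is best done before users have registered):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;yaml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span&gt;upstream_oauth2:
&lt;&#x2F;span&gt;&lt;span&gt;  providers:
&lt;&#x2F;span&gt;&lt;span&gt;  # was: id: keycloak-provider; a new ID makes MAS
&lt;&#x2F;span&gt;&lt;span&gt;  # re-fetch the Keycloak discovery document
&lt;&#x2F;span&gt;&lt;span&gt;  - id: keycloak-provider-v2
&lt;&#x2F;span&gt;&lt;span&gt;    issuer: https:&#x2F;&#x2F;thinkmeshmatrix.tail16ecc2.ts.net&#x2F;realms&#x2F;matrix
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;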
&lt;h3 id=&quot;6-user-provisioning-issues&quot;&gt;6. User Provisioning Issues&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #7&lt;&#x2F;strong&gt;: “Localpart not available” error when trying to register via SSO. The user existed in Synapse’s database from a failed previous attempt, but not in MAS. Had to manually clean up:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Stop Synapse
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span&gt; stop synapse
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Clean up orphaned user data
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;sudo&lt;&#x2F;span&gt;&lt;span&gt; sqlite3 &#x2F;home&#x2F;pi&#x2F;matrix-synapse&#x2F;homeserver.db \
&lt;&#x2F;span&gt;&lt;span&gt;  &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;DELETE FROM users WHERE name = &amp;#39;@username:server&amp;#39;;&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;sudo&lt;&#x2F;span&gt;&lt;span&gt; sqlite3 &#x2F;home&#x2F;pi&#x2F;matrix-synapse&#x2F;homeserver.db \
&lt;&#x2F;span&gt;&lt;span&gt;  &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;DELETE FROM profiles WHERE full_user_id LIKE &amp;#39;%username%&amp;#39;;&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Clean up MAS
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span&gt; exec mas-postgres psql&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -U&lt;&#x2F;span&gt;&lt;span&gt; mas&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -d&lt;&#x2F;span&gt;&lt;span&gt; mas&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -c &lt;&#x2F;span&gt;&lt;span&gt;\
&lt;&#x2F;span&gt;&lt;span&gt;  &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;DELETE FROM upstream_oauth_authorization_sessions; DELETE FROM upstream_oauth_links;&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;docker&lt;&#x2F;span&gt;&lt;span&gt; start synapse
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;7-synapse-configuration-evolution&quot;&gt;7. Synapse Configuration Evolution&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #8&lt;&#x2F;strong&gt;: Synapse 1.136+ deprecated &lt;code&gt;experimental_features.msc3861&lt;&#x2F;code&gt; in favor of a stable &lt;code&gt;matrix_authentication_service&lt;&#x2F;code&gt; config block:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;yaml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Old (deprecated)
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;experimental_features&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;msc3861&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;enabled&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;issuer&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;...
&lt;&#x2F;span&gt;&lt;span&gt;    
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# New (stable)
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;matrix_authentication_service&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;enabled&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;endpoint&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;http:&#x2F;&#x2F;mas:8080
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;secret&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;shared-secret&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;8-qr-code-login-msc4108&quot;&gt;8. QR Code Login (MSC4108)&lt;&#x2F;h3&gt;
&lt;p&gt;The final boss: getting QR code login working for Element iOS.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #9&lt;&#x2F;strong&gt;: MSC4108 wasn’t being advertised. Had to add to Synapse config:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;yaml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;experimental_features&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;msc4108_enabled&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Hiccup #10&lt;&#x2F;strong&gt;: The rendezvous endpoint was incorrectly routed to MAS instead of Synapse. Synapse has a built-in rendezvous server at &lt;code&gt;&#x2F;_synapse&#x2F;client&#x2F;rendezvous&lt;&#x2F;code&gt;. Removed the incorrect nginx route and let it fall through to Synapse.&lt;&#x2F;p&gt;
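&lt;p&gt;Falling through to the catch-all works; an explicit route makes the intent visible (a sketch using the same upstream names as above):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;nginx&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-nginx &quot;&gt;&lt;code class=&quot;language-nginx&quot; data-lang=&quot;nginx&quot;&gt;&lt;span&gt;# MSC4108 rendezvous is served by Synapse, not MAS
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;_synapse&#x2F;client&#x2F;rendezvous { proxy_pass http:&#x2F;&#x2F;synapse; }
&lt;&#x2F;span&gt;&lt;span&gt;location &#x2F;_matrix&#x2F;client&#x2F;unstable&#x2F;org.matrix.msc4108&#x2F; { proxy_pass http:&#x2F;&#x2F;synapse; }
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;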
&lt;p&gt;Testing the rendezvous endpoint:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;curl -X&lt;&#x2F;span&gt;&lt;span&gt; POST &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;server&#x2F;_matrix&#x2F;client&#x2F;unstable&#x2F;org.matrix.msc4108&#x2F;rendezvous&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -H &lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;Content-Type: text&#x2F;plain&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -d &lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;test&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Returns: {&amp;quot;url&amp;quot;:&amp;quot;https:&#x2F;&#x2F;server&#x2F;_synapse&#x2F;client&#x2F;rendezvous&#x2F;SESSION_ID&amp;quot;}
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;final-configuration&quot;&gt;Final Configuration&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;synapse-homeserver-yaml&quot;&gt;Synapse (&lt;code&gt;homeserver.yaml&lt;&#x2F;code&gt;)&lt;&#x2F;h3&gt;
&lt;pre data-lang=&quot;yaml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;server_name&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;thinkmeshmatrix.tail16ecc2.ts.net&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;public_baseurl&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;thinkmeshmatrix.tail16ecc2.ts.net&#x2F;&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;serve_server_wellknown&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;allow_public_rooms_over_federation&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;matrix_authentication_service&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;enabled&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;endpoint&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;http:&#x2F;&#x2F;mas:8080
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;secret&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;shared-secret&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;account_management_url&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;thinkmeshmatrix.tail16ecc2.ts.net&#x2F;account&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;experimental_features&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;msc4108_enabled&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;mas-config-yaml&quot;&gt;MAS (&lt;code&gt;config.yaml&lt;&#x2F;code&gt;)&lt;&#x2F;h3&gt;
&lt;pre data-lang=&quot;yaml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yaml &quot;&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;http&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;public_base&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;thinkmeshmatrix.tail16ecc2.ts.net&#x2F;
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;issuer&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;thinkmeshmatrix.tail16ecc2.ts.net&#x2F;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;matrix&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;homeserver&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;thinkmeshmatrix.tail16ecc2.ts.net
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;endpoint&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;http:&#x2F;&#x2F;synapse:8008
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;secret&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;shared-secret&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;upstream_oauth2&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;providers&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  - &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;id&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;keycloak-provider
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;issuer&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;thinkmeshmatrix.tail16ecc2.ts.net&#x2F;realms&#x2F;matrix
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;client_id&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;mas
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;client_secret&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;keycloak-client-secret&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;claims_imports&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;localpart&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;action&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;require
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;template&lt;&#x2F;span&gt;&lt;span&gt;: &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;{{ user.preferred_username }}&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;experimental&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;qr_login&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;enabled&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;true
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;data-persistence-recovery&quot;&gt;Data Persistence &amp;amp; Recovery&lt;&#x2F;h2&gt;
&lt;p&gt;All containers are configured with:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;--restart unless-stopped&lt;&#x2F;code&gt; for automatic recovery after power outages&lt;&#x2F;li&gt;
&lt;li&gt;Proper volume mounts for data persistence:&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Service&lt;&#x2F;th&gt;&lt;th&gt;Volume&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Synapse&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;&#x2F;home&#x2F;pi&#x2F;matrix-synapse&lt;&#x2F;code&gt; → &lt;code&gt;&#x2F;data&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;MAS&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;&#x2F;home&#x2F;pi&#x2F;mas&lt;&#x2F;code&gt; → &lt;code&gt;&#x2F;config&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;MAS-Postgres&lt;&#x2F;td&gt;&lt;td&gt;Docker volume &lt;code&gt;mas-postgres-data&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Keycloak&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;&#x2F;home&#x2F;pi&#x2F;keycloak-data&lt;&#x2F;code&gt; → &lt;code&gt;&#x2F;opt&#x2F;keycloak&#x2F;data&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;nginx-proxy&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;&#x2F;home&#x2F;pi&#x2F;nginx-proxy&lt;&#x2F;code&gt; → &lt;code&gt;&#x2F;etc&#x2F;nginx&#x2F;conf.d&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
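&lt;p&gt;Each container follows the same pattern; a sketch for Synapse (port and network flags elided, image tag illustrative):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span&gt;docker run -d --name synapse \
&lt;&#x2F;span&gt;&lt;span&gt;  --restart unless-stopped \
&lt;&#x2F;span&gt;&lt;span&gt;  -v &#x2F;home&#x2F;pi&#x2F;matrix-synapse:&#x2F;data \
&lt;&#x2F;span&gt;&lt;span&gt;  matrixdotorg&#x2F;synapse:latest
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;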
&lt;h2 id=&quot;lessons-learned&quot;&gt;Lessons Learned&lt;&#x2F;h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Docker networking is tricky&lt;&#x2F;strong&gt; - Internal hostnames (&lt;code&gt;keycloak:8080&lt;&#x2F;code&gt;) work for server-to-server communication but not for browser redirects. Always use public URLs in OAuth flows.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;OIDC discovery caching&lt;&#x2F;strong&gt; - MAS caches discovery documents. When you change upstream provider URLs, you may need to force a refresh by changing provider IDs or restarting.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Database cleanup matters&lt;&#x2F;strong&gt; - Failed registration attempts can leave orphaned records in Synapse’s database that block future attempts. Clean up both the &lt;code&gt;users&lt;&#x2F;code&gt; and &lt;code&gt;profiles&lt;&#x2F;code&gt; tables.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;nginx routing order&lt;&#x2F;strong&gt; - More specific routes must come before catch-all routes. The MSC4108 rendezvous endpoint needs to go to Synapse, not MAS.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Synapse evolves fast&lt;&#x2F;strong&gt; - Configuration that works in one version may be deprecated in the next. The move from &lt;code&gt;experimental_features.msc3861&lt;&#x2F;code&gt; to &lt;code&gt;matrix_authentication_service&lt;&#x2F;code&gt; caught me off guard.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h2 id=&quot;current-status&quot;&gt;Current Status&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;✅ Matrix homeserver running at &lt;code&gt;thinkmeshmatrix.tail16ecc2.ts.net&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;✅ Federation working with matrix.org and other servers&lt;&#x2F;li&gt;
&lt;li&gt;✅ SSO via Keycloak operational&lt;&#x2F;li&gt;
&lt;li&gt;✅ QR code login (MSC4108) functional&lt;&#x2F;li&gt;
&lt;li&gt;✅ Automatic restart after power outages&lt;&#x2F;li&gt;
&lt;li&gt;✅ Data persistence across container restarts&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;resources&quot;&gt;Resources&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;element-hq.github.io&#x2F;matrix-authentication-service&#x2F;&quot;&gt;Matrix Authentication Service Docs&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;element-hq.github.io&#x2F;synapse&#x2F;latest&#x2F;usage&#x2F;configuration&#x2F;config_documentation.html&quot;&gt;Synapse Configuration Manual&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;matrix-org&#x2F;matrix-spec-proposals&#x2F;pull&#x2F;4108&quot;&gt;MSC4108: QR Code Login&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;tailscale.com&#x2F;kb&#x2F;1223&#x2F;funnel&quot;&gt;Tailscale Funnel&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>2Gbps Link Aggregation on Turing Pi BMC with 802.3ad LACP</title>
          <pubDate>Sun, 11 Jan 2026 00:00:00 -0700</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/turingpi-2gbps-lacp-bonding/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/turingpi-2gbps-lacp-bonding/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/turingpi-2gbps-lacp-bonding/">&lt;h1 id=&quot;2gbps-link-aggregation-on-turing-pi-bmc-with-802-3ad-lacp&quot;&gt;2Gbps Link Aggregation on Turing Pi BMC with 802.3ad LACP&lt;&#x2F;h1&gt;
&lt;p&gt;The Turing Pi 2.5 has two gigabit Ethernet ports (ge0 and ge1) on its BMC. By bonding these together with 802.3ad LACP, you can achieve true 2Gbps aggregate bandwidth for your cluster traffic. This post documents how I implemented this feature in custom firmware.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-2gbps&quot;&gt;Why 2Gbps?&lt;&#x2F;h2&gt;
&lt;p&gt;In a homelab cluster, the BMC handles all network traffic for up to 4 compute nodes. With multiple nodes running workloads that generate significant network I&#x2F;O, a single gigabit link can become a bottleneck. Link aggregation doubles the available bandwidth and provides redundancy.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;Turing Pi 2.5 board&lt;&#x2F;li&gt;
&lt;li&gt;Custom firmware with bonding support (or build your own)&lt;&#x2F;li&gt;
&lt;li&gt;Switch with LACP support (I used a UniFi USW Pro Max 24 PoE)&lt;&#x2F;li&gt;
&lt;li&gt;Two Ethernet cables connected to both BMC ports&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;the-challenge&quot;&gt;The Challenge&lt;&#x2F;h2&gt;
&lt;p&gt;Getting 802.3ad LACP working on the Turing Pi BMC required solving several issues:&lt;&#x2F;p&gt;
&lt;h3 id=&quot;1-dsa-ports-share-the-same-mac-address&quot;&gt;1. DSA Ports Share the Same MAC Address&lt;&#x2F;h3&gt;
&lt;p&gt;The ge0 and ge1 interfaces are DSA (Distributed Switch Architecture) ports on a Realtek switch chip. By default, they share the same MAC address, which breaks bonding since Linux requires unique MACs for slave interfaces.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;2-hardcoded-bonding-mode&quot;&gt;2. Hardcoded Bonding Mode&lt;&#x2F;h3&gt;
&lt;p&gt;The stock firmware’s &lt;code&gt;S00dsa&lt;&#x2F;code&gt; init script loads the bonding module with a hardcoded mode:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;modprobe&lt;&#x2F;span&gt;&lt;span&gt; bonding mode=balance-alb miimon=100
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This prevents changing to 802.3ad mode at runtime.&lt;&#x2F;p&gt;
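&lt;p&gt;The kernel only accepts a mode change while the bond is down with no slaves, so once &lt;code&gt;S00dsa&lt;&#x2F;code&gt; has enslaved the ports under balance-alb, an attempt like this is rejected:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span&gt;# Rejected at runtime once slaves are attached
&lt;&#x2F;span&gt;&lt;span&gt;echo 802.3ad &amp;gt; &#x2F;sys&#x2F;class&#x2F;net&#x2F;bond0&#x2F;bonding&#x2F;mode
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;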
&lt;h3 id=&quot;3-mode-changes-don-t-persist&quot;&gt;3. Mode Changes Don’t Persist&lt;&#x2F;h3&gt;
&lt;p&gt;Even if you manually configure 802.3ad, the changes are lost on reboot because the module is loaded with balance-alb mode before the network configuration runs.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-solution&quot;&gt;The Solution&lt;&#x2F;h2&gt;
&lt;p&gt;I created a proper bonding setup with these components:&lt;&#x2F;p&gt;
&lt;h3 id=&quot;architecture&quot;&gt;Architecture&lt;&#x2F;h3&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;Web Browser → BMC-UI (http:&#x2F;&#x2F;turingpi) → REST API → bmcd daemon
&lt;&#x2F;span&gt;&lt;span&gt;    → network_config.rs → &#x2F;etc&#x2F;bonding.conf → S45bonding → bond0
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;key-changes&quot;&gt;Key Changes&lt;&#x2F;h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Fixed S00dsa&lt;&#x2F;strong&gt;: Removed the hardcoded mode from modprobe:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;modprobe&lt;&#x2F;span&gt;&lt;span&gt; bonding  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# No mode specified
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Created S45bonding&lt;&#x2F;strong&gt;: A new init script that runs after network init and properly configures bonding:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Reads mode from &lt;code&gt;&#x2F;etc&#x2F;bonding.conf&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Sets unique MAC on ge1 before enslaving&lt;&#x2F;li&gt;
&lt;li&gt;Deletes and recreates bond0 with the correct mode&lt;&#x2F;li&gt;
&lt;li&gt;Adds bond0 to the br0 bridge&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;REST API Integration&lt;&#x2F;strong&gt;: Added &lt;code&gt;network_config.rs&lt;&#x2F;code&gt; module to bmcd for web UI control&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h2 id=&quot;configuration-files&quot;&gt;Configuration Files&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;etc-bonding-conf&quot;&gt;&#x2F;etc&#x2F;bonding.conf&lt;&#x2F;h3&gt;
&lt;p&gt;Contains the bonding mode (one of the supported modes):&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;active-backup
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;The default is &lt;code&gt;active-backup&lt;&#x2F;code&gt; for safety - it works without any switch configuration. Change to &lt;code&gt;802.3ad&lt;&#x2F;code&gt; for LACP once your switch is configured.&lt;&#x2F;p&gt;
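&lt;p&gt;Switching later is just editing the file and re-running the bonding script (a sketch; it assumes &lt;code&gt;S45bonding&lt;&#x2F;code&gt; can be re-run and that the switch-side LACP group is already configured):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span&gt;echo &amp;quot;802.3ad&amp;quot; &amp;gt; &#x2F;etc&#x2F;bonding.conf
&lt;&#x2F;span&gt;&lt;span&gt;touch &#x2F;etc&#x2F;bonding.enabled
&lt;&#x2F;span&gt;&lt;span&gt;&#x2F;etc&#x2F;init.d&#x2F;S45bonding restart
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;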
&lt;h3 id=&quot;etc-bonding-enabled&quot;&gt;&#x2F;etc&#x2F;bonding.enabled&lt;&#x2F;h3&gt;
&lt;p&gt;Empty marker file that enables bonding when present.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;s45bonding-init-script&quot;&gt;S45bonding Init Script&lt;&#x2F;h3&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;#!&#x2F;bin&#x2F;sh
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BONDING_CONF&lt;&#x2F;span&gt;&lt;span&gt;=&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&#x2F;etc&#x2F;bonding.conf&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BONDING_ENABLED&lt;&#x2F;span&gt;&lt;span&gt;=&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&#x2F;etc&#x2F;bonding.enabled&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;get_bonding_mode&lt;&#x2F;span&gt;&lt;span&gt;() {
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;if &lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;[ &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;-f &lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BONDING_CONF&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;]&lt;&#x2F;span&gt;&lt;span&gt;; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;then
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;cat &lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BONDING_CONF&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&#x2F;dev&#x2F;null | &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;head -1 &lt;&#x2F;span&gt;&lt;span&gt;| &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;tr -d &lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;[:space:]&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;else
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;echo &lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;active-backup&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;fi
&lt;&#x2F;span&gt;&lt;span&gt;}
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#8fa1b3;&quot;&gt;start_bonding&lt;&#x2F;span&gt;&lt;span&gt;() {
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;if &lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;[ &lt;&#x2F;span&gt;&lt;span&gt;! &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;-f &lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BONDING_ENABLED&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;]&lt;&#x2F;span&gt;&lt;span&gt;; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;then
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;return&lt;&#x2F;span&gt;&lt;span&gt; 0
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;fi
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;MODE&lt;&#x2F;span&gt;&lt;span&gt;=$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;get_bonding_mode&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Remove interfaces from bridge
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge0 nomaster &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&#x2F;dev&#x2F;null
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge1 nomaster &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&#x2F;dev&#x2F;null
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge0 down
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge1 down
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Set unique MAC on ge1 (DSA ports share same MAC by default)
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BASE_MAC&lt;&#x2F;span&gt;&lt;span&gt;=$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;cat&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt; &#x2F;sys&#x2F;class&#x2F;net&#x2F;ge0&#x2F;address &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&#x2F;dev&#x2F;null)
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;GE1_MAC&lt;&#x2F;span&gt;&lt;span&gt;=$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;echo &lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BASE_MAC&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; | &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;awk -F&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;: &lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;{
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;        last = (&amp;quot;0x&amp;quot; $6) + 1;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;        printf &amp;quot;%s:%s:%s:%s:%s:%02x&amp;quot;, $1, $2, $3, $4, $5, last
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;    }&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;)
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge1 address &amp;quot;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;GE1_MAC&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Delete existing bond and recreate with correct mode
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link del bond0 &lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;2&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt;&#x2F;dev&#x2F;null
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link add bond0 type bond mode &amp;quot;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;MODE&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; miimon 100
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# For 802.3ad, set LACP rate to fast
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;[ &lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;MODE&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; = &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;802.3ad&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;] &lt;&#x2F;span&gt;&lt;span&gt;&amp;amp;&amp;amp; &lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;echo&lt;&#x2F;span&gt;&lt;span&gt; fast &amp;gt; &#x2F;sys&#x2F;class&#x2F;net&#x2F;bond0&#x2F;bonding&#x2F;lacp_rate
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# Enslave interfaces
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge0 master bond0
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge1 master bond0
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge0 up
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set ge1 up
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set bond0 up
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ip&lt;&#x2F;span&gt;&lt;span&gt; link set bond0 master br0
&lt;&#x2F;span&gt;&lt;span&gt;}
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
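One edge case in the MAC-increment awk above: if ge0’s last octet is ff, adding 1 yields 0x100 and %02x prints three hex digits, producing an invalid MAC (hex string-to-number conversion also differs between awk implementations). A sketch of the same increment in plain shell arithmetic, wrapping within a byte; the sample MAC here is made up:

```shell
# Derive ge1's MAC from ge0's by incrementing the last octet, wrapping at 0xff.
BASE_MAC="aa:bb:cc:dd:ee:ff"          # stand-in for $(cat /sys/class/net/ge0/address)
prefix=${BASE_MAC%:*}                 # first five octets
last=${BASE_MAC##*:}                  # last octet as hex text
next=$(( (0x$last + 1) % 256 ))       # wrap so %02x always prints two digits
GE1_MAC=$(printf '%s:%02x' "$prefix" "$next")
echo "$GE1_MAC"                       # aa:bb:cc:dd:ee:00
```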
&lt;h2 id=&quot;supported-bonding-modes&quot;&gt;Supported Bonding Modes&lt;&#x2F;h2&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Mode&lt;&#x2F;th&gt;&lt;th&gt;Name&lt;&#x2F;th&gt;&lt;th&gt;Switch Config&lt;&#x2F;th&gt;&lt;th&gt;Description&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;0&lt;&#x2F;td&gt;&lt;td&gt;balance-rr&lt;&#x2F;td&gt;&lt;td&gt;Aggregate&lt;&#x2F;td&gt;&lt;td&gt;Round-robin&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;1&lt;&#x2F;td&gt;&lt;td&gt;active-backup&lt;&#x2F;td&gt;&lt;td&gt;None&lt;&#x2F;td&gt;&lt;td&gt;Failover only&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;&#x2F;td&gt;&lt;td&gt;balance-xor&lt;&#x2F;td&gt;&lt;td&gt;Aggregate&lt;&#x2F;td&gt;&lt;td&gt;XOR hashing&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;3&lt;&#x2F;td&gt;&lt;td&gt;broadcast&lt;&#x2F;td&gt;&lt;td&gt;None&lt;&#x2F;td&gt;&lt;td&gt;All slaves transmit&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;4&lt;&#x2F;td&gt;&lt;td&gt;802.3ad&lt;&#x2F;td&gt;&lt;td&gt;LACP&lt;&#x2F;td&gt;&lt;td&gt;IEEE LACP&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;5&lt;&#x2F;td&gt;&lt;td&gt;balance-tlb&lt;&#x2F;td&gt;&lt;td&gt;None&lt;&#x2F;td&gt;&lt;td&gt;Transmit load balancing&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;6&lt;&#x2F;td&gt;&lt;td&gt;balance-alb&lt;&#x2F;td&gt;&lt;td&gt;None&lt;&#x2F;td&gt;&lt;td&gt;Adaptive load balancing&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;h2 id=&quot;switch-configuration&quot;&gt;Switch Configuration&lt;&#x2F;h2&gt;
&lt;p&gt;For 802.3ad LACP, your switch must be configured to expect LACP on those ports. On UniFi:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Go to Devices → Your Switch → Ports&lt;&#x2F;li&gt;
&lt;li&gt;Select the two ports connected to the Turing Pi&lt;&#x2F;li&gt;
&lt;li&gt;Create an Aggregate with LACP mode enabled&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h2 id=&quot;verification&quot;&gt;Verification&lt;&#x2F;h2&gt;
&lt;p&gt;Check the bond status:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;bash&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-bash &quot;&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;cat&lt;&#x2F;span&gt;&lt;span&gt; &#x2F;proc&#x2F;net&#x2F;bonding&#x2F;bond0
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Expected output:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;Ethernet Channel Bonding Driver: v6.8.12
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Bonding Mode: IEEE 802.3ad Dynamic link aggregation
&lt;&#x2F;span&gt;&lt;span&gt;Transmit Hash Policy: layer2 (0)
&lt;&#x2F;span&gt;&lt;span&gt;MII Status: up
&lt;&#x2F;span&gt;&lt;span&gt;MII Polling Interval (ms): 100
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;802.3ad info
&lt;&#x2F;span&gt;&lt;span&gt;LACP active: on
&lt;&#x2F;span&gt;&lt;span&gt;LACP rate: fast
&lt;&#x2F;span&gt;&lt;span&gt;Active Aggregator Info:
&lt;&#x2F;span&gt;&lt;span&gt;    Aggregator ID: 1
&lt;&#x2F;span&gt;&lt;span&gt;    Number of ports: 2
&lt;&#x2F;span&gt;&lt;span&gt;    Partner Mac Address: 9c:05:d6:63:f9:bb
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Slave Interface: ge0
&lt;&#x2F;span&gt;&lt;span&gt;MII Status: up
&lt;&#x2F;span&gt;&lt;span&gt;Speed: 1000 Mbps
&lt;&#x2F;span&gt;&lt;span&gt;Duplex: full
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Slave Interface: ge1
&lt;&#x2F;span&gt;&lt;span&gt;MII Status: up
&lt;&#x2F;span&gt;&lt;span&gt;Speed: 1000 Mbps
&lt;&#x2F;span&gt;&lt;span&gt;Duplex: full
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Key things to verify:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Bonding Mode: IEEE 802.3ad Dynamic link aggregation&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;Number of ports: 2&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;Partner Mac Address&lt;&#x2F;code&gt; shows your switch’s MAC (not 00:00:00:00:00:00)&lt;&#x2F;li&gt;
&lt;li&gt;Both ge0 and ge1 show &lt;code&gt;MII Status: up&lt;&#x2F;code&gt; and &lt;code&gt;Speed: 1000 Mbps&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
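Those checks are easy to script. A minimal sketch, with a function name and messages of my own invention, that takes the text of /proc/net/bonding/bond0 and flags the common failure modes:

```shell
# check_bond: pass it the contents of /proc/net/bonding/bond0; prints OK or a reason.
check_bond() {
    status="$1"
    if ! echo "$status" | grep -q 'Bonding Mode: IEEE 802.3ad'; then
        echo "FAIL: bond is not in 802.3ad mode"; return 1
    fi
    if echo "$status" | grep -q 'Partner Mac Address: 00:00:00:00:00:00'; then
        echo "FAIL: LACP never negotiated with the switch"; return 1
    fi
    # expect three up links: bond0 itself plus ge0 and ge1
    if [ "$(echo "$status" | grep -c 'MII Status: up')" -lt 3 ]; then
        echo "FAIL: one or more links are down"; return 1
    fi
    echo "OK"
}

# On the BMC: check_bond "$(cat /proc/net/bonding/bond0)"
```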
&lt;h2 id=&quot;web-ui-api&quot;&gt;Web UI &#x2F; API&lt;&#x2F;h2&gt;
&lt;p&gt;The bonding configuration is also accessible via:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Web UI&lt;&#x2F;strong&gt;: &lt;code&gt;http:&#x2F;&#x2F;turingpi&lt;&#x2F;code&gt; → System → Network Settings&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;API&lt;&#x2F;strong&gt;: &lt;code&gt;GET &#x2F;api&#x2F;network_config&lt;&#x2F;code&gt; returns current status&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;API&lt;&#x2F;strong&gt;: &lt;code&gt;POST &#x2F;api&#x2F;network_config?enabled=1&amp;amp;mode=802.3ad&amp;amp;apply=1&lt;&#x2F;code&gt; to configure&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;troubleshooting&quot;&gt;Troubleshooting&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;lacp-not-negotiating-partner-mac-is-00-00-00-00-00-00&quot;&gt;LACP Not Negotiating (Partner MAC is 00:00:00:00:00:00)&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;Verify LACP is enabled on the switch ports&lt;&#x2F;li&gt;
&lt;li&gt;Check that both cables are connected&lt;&#x2F;li&gt;
&lt;li&gt;Wait 30+ seconds for negotiation&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;network-unreachable-after-mode-change&quot;&gt;Network Unreachable After Mode Change&lt;&#x2F;h3&gt;
&lt;p&gt;If you change to 802.3ad but the switch isn’t configured for LACP:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Temporarily disable LACP on the switch&lt;&#x2F;li&gt;
&lt;li&gt;Access BMC and change mode to &lt;code&gt;active-backup&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Then configure switch for LACP and switch back to &lt;code&gt;802.3ad&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h3 id=&quot;bonding-not-starting&quot;&gt;Bonding Not Starting&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;Verify &lt;code&gt;&#x2F;etc&#x2F;bonding.enabled&lt;&#x2F;code&gt; exists&lt;&#x2F;li&gt;
&lt;li&gt;Check &lt;code&gt;&#x2F;etc&#x2F;bonding.conf&lt;&#x2F;code&gt; contains a valid mode&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;&#x2F;etc&#x2F;init.d&#x2F;S45bonding status&lt;&#x2F;code&gt; to check bond state&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;flashing-to-internal-storage&quot;&gt;Flashing to Internal Storage&lt;&#x2F;h2&gt;
&lt;p&gt;To flash the custom firmware to internal storage without using an SD card:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;Boot from SD card with working firmware&lt;&#x2F;li&gt;
&lt;li&gt;Attach UBI device: &lt;code&gt;ubiattach -m 1 -d 0&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Copy firmware: &lt;code&gt;scp firmware.tpu root@turingpi:&#x2F;tmp&#x2F;&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Write to UBI: &lt;code&gt;ubiupdatevol &#x2F;dev&#x2F;ubi0_1 &#x2F;tmp&#x2F;firmware.tpu&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Reboot without SD card&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h2 id=&quot;links&quot;&gt;Links&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;turing-machines&#x2F;BMC-Firmware&#x2F;pull&#x2F;254&quot;&gt;Pull Request&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;www.kernel.org&#x2F;doc&#x2F;Documentation&#x2F;networking&#x2F;bonding.txt&quot;&gt;Linux Bonding Documentation&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;standards.ieee.org&#x2F;ieee&#x2F;802.3ad&#x2F;1587&#x2F;&quot;&gt;802.3ad LACP Standard&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
</description>
      </item>
      <item>
          <title>Tips &amp; Tricks of the trade: sanitize secrets from dirty commits</title>
          <pubDate>Wed, 15 Jan 2025 19:09:33 -0700</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-sanitize-secrets/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-sanitize-secrets/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-sanitize-secrets/">&lt;h1 id=&quot;tips-tricks-of-the-trade-sanitize-secrets-from-dirty-commits&quot;&gt;Tips &amp;amp; Tricks of the Trade: sanitize secrets from dirty commits&lt;&#x2F;h1&gt;
&lt;p&gt;Scan a repo for sensitive keys or secrets. Did you accidentally commit an API key?&lt;&#x2F;p&gt;
&lt;p&gt;Tools like &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;gitleaks&#x2F;gitleaks&#x2F;tree&#x2F;master&quot;&gt;gitleaks&lt;&#x2F;a&gt; or &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;trufflesecurity&#x2F;trufflehog&quot;&gt;trufflehog&lt;&#x2F;a&gt; can be a very effective aid in cleaning up dirty commits.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;gitleaks&quot;&gt;GitLeaks&lt;&#x2F;h2&gt;
&lt;p&gt;Let’s cover gitleaks to remove an API key from history. First, install it:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;brew install gitleaks
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Set up a config file &lt;code&gt;.gitleaks.toml&lt;&#x2F;code&gt; in the root dir of the repo. This example exempts a data directory, used as a static asset store, from a couple of rules.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;.gitleaks.toml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-.gitleaks.toml &quot;&gt;&lt;code class=&quot;language-.gitleaks.toml&quot; data-lang=&quot;.gitleaks.toml&quot;&gt;&lt;span&gt;title = &amp;quot;Custom Gitleaks Configuration&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;[extend]
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;useDefault = true
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;tags = [&amp;quot;data_dir&amp;quot;]
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;[[rules]]
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;id = &amp;quot;generic-api-key&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;[[rules.allowlists]]
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;paths = [
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&amp;#39;&amp;#39;^data&#x2F;raw-events-.*\.(json|parquet)$&amp;#39;&amp;#39;&amp;#39;,
&lt;&#x2F;span&gt;&lt;span&gt;]
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;[[rules]]
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;id = &amp;quot;github-app-token&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;[[rules.allowlists]]
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;paths = [
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&amp;#39;&amp;#39;^data&#x2F;raw-events-.*\.(json|parquet)$&amp;#39;&amp;#39;&amp;#39;
&lt;&#x2F;span&gt;&lt;span&gt;]
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Run the CLI command, or better yet, set up a pre-commit hook so the check always runs.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;&lt;span&gt;❯ gitleaks detect --source . --report-format json --report-path gitleaks-report.json
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;    ○
&lt;&#x2F;span&gt;&lt;span&gt;    │╲
&lt;&#x2F;span&gt;&lt;span&gt;    │ ○
&lt;&#x2F;span&gt;&lt;span&gt;    ○ ░
&lt;&#x2F;span&gt;&lt;span&gt;    ░    gitleaks
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;2:24PM INF 373 commits scanned.
&lt;&#x2F;span&gt;&lt;span&gt;2:24PM INF scanned ~4469777469 bytes (4.47 GB) in 1m7.7s
&lt;&#x2F;span&gt;&lt;span&gt;2:24PM WRN leaks found: 6
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;resolve-the-leaks&quot;&gt;Resolve the leaks&lt;&#x2F;h2&gt;
&lt;p&gt;GitHub has an &lt;a href=&quot;https:&#x2F;&#x2F;docs.github.com&#x2F;en&#x2F;authentication&#x2F;keeping-your-account-and-data-secure&#x2F;removing-sensitive-data-from-a-repository&quot;&gt;excellent guide&lt;&#x2F;a&gt; using &lt;code&gt;git filter-repo&lt;&#x2F;code&gt;. The TL;DR: clone a fresh copy of the repo and use a replacements file stored outside of it to rewrite text inside commits.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;&lt;span&gt;❯ git filter-repo --sensitive-data-removal --replace-text ..&#x2F;replacements.txt
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
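The replacements file uses filter-repo’s `literal==>replacement` syntax, one rule per line, with an optional `regex:` prefix for patterns. A hypothetical example (the literal key value here is invented; the second line matches the standard AWS access key ID shape):

```text
hypothetical_leaked_key_value_goes_here==>REMOVED
regex:AKIA[0-9A-Z]{16}==>REMOVED
```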
&lt;p&gt;You can check which refs were rewritten with grep:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;❯ grep -c &amp;#39;^refs&#x2F;pull&#x2F;.*&#x2F;head$&amp;#39; .git&#x2F;filter-repo&#x2F;changed-refs
&lt;&#x2F;span&gt;&lt;span&gt;11
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;❯ grep &amp;#39;^refs&#x2F;pull&#x2F;.*&#x2F;head$&amp;#39; .git&#x2F;filter-repo&#x2F;changed-refs
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;123&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;37&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;372&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;379&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;42&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;433&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;48&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;57&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;72&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;73&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;refs&#x2F;pull&#x2F;76&#x2F;head
&lt;&#x2F;span&gt;&lt;span&gt;❯ git show refs&#x2F;pull&#x2F;37&#x2F;head
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Next, set up pre-commit to prevent this in the future.&lt;&#x2F;p&gt;
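A minimal sketch of such a hook, assuming gitleaks v8 (whose `protect --staged` subcommand scans only the changes about to be committed); the hook body and flags are my choices, not a project convention:

```shell
# Generate a git pre-commit hook that runs gitleaks against the staged diff.
printf '%s\n' \
    '#!/bin/sh' \
    '# Abort the commit if gitleaks finds a potential secret in staged changes.' \
    'exec gitleaks protect --staged --redact' > pre-commit
chmod +x pre-commit
# Install it: mv pre-commit "$(git rev-parse --git-dir)/hooks/pre-commit"
```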
</description>
      </item>
      <item>
          <title>Hello, World!</title>
          <pubDate>Thu, 09 Jan 2025 00:00:00 +0000</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/hello-world/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/hello-world/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/hello-world/">&lt;p&gt;Welcome to the new blog! This site has been rebuilt from the ground up using &lt;a href=&quot;https:&#x2F;&#x2F;www.getzola.org&#x2F;&quot;&gt;Zola&lt;&#x2F;a&gt;, a blazing-fast static site generator written in Rust.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-zola&quot;&gt;Why Zola?&lt;&#x2F;h2&gt;
&lt;p&gt;After using Hugo for a while, I wanted something that:&lt;&#x2F;p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Stays fast&lt;&#x2F;strong&gt; - Zola is incredibly quick at building sites&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Has simpler templating&lt;&#x2F;strong&gt; - Tera templates feel more intuitive&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Is written in Rust&lt;&#x2F;strong&gt; - Because why not?&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Works well with Obsidian&lt;&#x2F;strong&gt; - My notes become blog posts seamlessly&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;h2 id=&quot;the-new-publishing-workflow&quot;&gt;The New Publishing Workflow&lt;&#x2F;h2&gt;
&lt;p&gt;The content for this blog now lives in my Obsidian vault. When I push changes to the vault repository, a GitHub Action automatically triggers a rebuild of this site. Here’s how it works:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;Obsidian Vault -&amp;gt; GitHub Push -&amp;gt; Blog Rebuild -&amp;gt; GitHub Pages
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;No more fighting with Hugo’s markdown quirks. Just write in Obsidian, push, and publish.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-s-next&quot;&gt;What’s Next?&lt;&#x2F;h2&gt;
&lt;p&gt;I’ll be migrating some older content and writing new posts about:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Infrastructure and DevOps&lt;&#x2F;li&gt;
&lt;li&gt;Rust programming&lt;&#x2F;li&gt;
&lt;li&gt;Developer tooling&lt;&#x2F;li&gt;
&lt;li&gt;Random technical adventures&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Stay tuned, and thanks for reading!&lt;&#x2F;p&gt;
&lt;hr &#x2F;&gt;
&lt;p&gt;&lt;em&gt;This post was written in Obsidian and published automatically via GitHub Actions.&lt;&#x2F;em&gt;&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>performance testing with k6</title>
          <pubDate>Thu, 06 Jun 2024 19:42:52 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/performance-testing-with-k6/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/performance-testing-with-k6/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/performance-testing-with-k6/">&lt;p&gt;Performance testing with K6&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>wingbits</title>
          <pubDate>Thu, 06 Jun 2024 19:40:24 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/wingbits/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/wingbits/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/wingbits/">&lt;p&gt;Flight tracking&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>caching-with-nodejs</title>
          <pubDate>Mon, 13 May 2024 16:18:43 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/caching-with-nodejs/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/caching-with-nodejs/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/caching-with-nodejs/"></description>
      </item>
      <item>
          <title>file sync</title>
          <pubDate>Mon, 13 May 2024 11:35:10 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-file-sync/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-file-sync/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-file-sync/">&lt;h1 id=&quot;tips-tricks-of-the-trade-file-sync&quot;&gt;Tips &amp;amp; Tricks of the Trade: file sync&lt;&#x2F;h1&gt;
&lt;p&gt;There’s the ole trusty rsync:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span&gt; rsync&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -avz ~&lt;&#x2F;span&gt;&lt;span&gt;&#x2F;compressed_file.tar.gz remoteuser@otherhost:&#x2F;share&#x2F;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;$&lt;&#x2F;span&gt;&lt;span&gt; ssh remoteuser@otherhost &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;tar zxf &#x2F;share&#x2F;compressed_file.tar.gz -C &#x2F;share&#x2F;uncompressed&#x2F;&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Transfer and Extract in One Step&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;ssh remoteuser@otherhost &amp;quot;tar zxf - -C &#x2F;share&#x2F;uncompressed&#x2F;&amp;quot; &amp;lt; ~&#x2F;compressed_file.tar.gz
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;mbuffer&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt; when dealing with large data transfers, especially over unstable networks, where buffering can help ensure smoother and more reliable data flow.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;&lt;span&gt;$ mbuffer -s 1K -m 512M -i ~&#x2F;compressed_file.tar.gz | ssh remoteuser@otherhost &amp;quot;tar zxf - -C &#x2F;share&#x2F;uncompressed&#x2F;&amp;quot;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;mbuffer -s 1K -m 512M -i &quot;$SOURCE_FILE&quot;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;-s 1K&lt;&#x2F;code&gt;: Sets the buffer block size to 1 kilobyte.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;-m 512M&lt;&#x2F;code&gt;: Allocates 512 megabytes of memory for buffering.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;-i &quot;$SOURCE_FILE&quot;&lt;&#x2F;code&gt;: Specifies the input file to read from.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;|&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;: Pipes the output of &lt;code&gt;mbuffer&lt;&#x2F;code&gt; to the next command.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;ssh remoteuser@otherhost &quot;tar zxf - -C &#x2F;share&#x2F;uncompressed&#x2F;&quot;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;Connects to the remote host &lt;code&gt;otherhost&lt;&#x2F;code&gt; with the user &lt;code&gt;remoteuser&lt;&#x2F;code&gt;.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;&quot;tar zxf - -C &#x2F;share&#x2F;uncompressed&#x2F;&quot;&lt;&#x2F;code&gt;: Extracts the tarball received from standard input (&lt;code&gt;-&lt;&#x2F;code&gt;) into the &lt;code&gt;&#x2F;share&#x2F;uncompressed&#x2F;&lt;&#x2F;code&gt; directory on the remote host.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;pv&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt; when you need a simple way to monitor the progress of data through a pipeline with minimal overhead.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;shell&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-shell &quot;&gt;&lt;code class=&quot;language-shell&quot; data-lang=&quot;shell&quot;&gt;&lt;span&gt;$ pv ~&#x2F;compressed_file.tar.gz | ssh admin@169.254.x.x &amp;quot;tar zxf - -C &#x2F;share&#x2F;uncompressed&#x2F;&amp;quot;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;pv &quot;$SOURCE_FILE&quot;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;pv&lt;&#x2F;code&gt;: Monitors the progress of data through the pipe.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;&quot;$SOURCE_FILE&quot;&lt;&#x2F;code&gt;: Specifies the file to transfer.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;|&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;: Pipes the output of &lt;code&gt;pv&lt;&#x2F;code&gt; to the next command.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;ssh admin@169.254.x.x &quot;tar zxf - -C &#x2F;share&#x2F;uncompressed&#x2F;&quot;&lt;&#x2F;code&gt;&lt;&#x2F;strong&gt;
&lt;ul&gt;
&lt;li&gt;Connects to the remote host with IP &lt;code&gt;169.254.x.x&lt;&#x2F;code&gt; using the user &lt;code&gt;admin&lt;&#x2F;code&gt;.&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;&quot;tar zxf - -C &#x2F;share&#x2F;uncompressed&#x2F;&quot;&lt;&#x2F;code&gt;: Extracts the tarball received from standard input (&lt;code&gt;-&lt;&#x2F;code&gt;) into the &lt;code&gt;&#x2F;share&#x2F;uncompressed&#x2F;&lt;&#x2F;code&gt; directory on the remote host.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
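One thing these piped transfers lack is rsync’s built-in verification, so it’s worth comparing checksums afterwards. A sketch using a throwaway temp file; in practice, point it at the tarball you transferred (the remote half is commented out since it needs the ssh session from the examples above):

```shell
# Compare local and remote checksums after a piped transfer.
f=$(mktemp)
echo "payload" > "$f"                              # stand-in for ~/compressed_file.tar.gz
local_sum=$(sha256sum "$f" | cut -d' ' -f1)        # keep just the hash field
echo "$local_sum"
# remote_sum=$(ssh remoteuser@otherhost "sha256sum /share/compressed_file.tar.gz" | cut -d' ' -f1)
# [ "$local_sum" = "$remote_sum" ] || echo "checksum mismatch"
rm -f "$f"
```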
</description>
      </item>
      <item>
          <title>raspberrypi-unifi-controller-ap</title>
          <pubDate>Sun, 11 Feb 2024 21:50:12 -0700</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/raspberrypi-unifi-controller-ap/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/raspberrypi-unifi-controller-ap/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/raspberrypi-unifi-controller-ap/">&lt;p&gt;I picked up an AP from Unifi, but I already have a firewalla and some switches. I originally set up the AP manually with the Wi-Fi AP. If that’s all you need to do it will run, I couldn’t figure out how to get VLANs configured for specific. Well, you need to use the Unifi controller. You can get this with a cloud key or a dream machine, but it gives you the remote console. Turns out you can make your own Unifi controller. Let’s do that with one of the Raspberry Pi’s that are lying around.&lt;&#x2F;p&gt;
&lt;p&gt;A quick Google later, and a big shout-out to &lt;a href=&quot;https:&#x2F;&#x2F;pimylifeup.com&#x2F;rasberry-pi-unifi&#x2F;&quot;&gt;Emmet&lt;&#x2F;a&gt; for putting the guide below together. Setup goes super fast, maybe 15 minutes; I spent more time looking for a nail for the reset pinhole, since the AP has to be reset before the controller can adopt it.&lt;&#x2F;p&gt;
&lt;p&gt;Also take a peek at Evan’s &lt;a href=&quot;https:&#x2F;&#x2F;evanmccann.net&#x2F;blog&#x2F;2021&#x2F;11&#x2F;unifi-advanced-wi-fi-settings&quot;&gt;Unifi Advanced Wi-Fi Settings&lt;&#x2F;a&gt; write-up; he explains all the configuration options and gives you a starting point on whether or not you should change a setting.&lt;&#x2F;p&gt;
&lt;p&gt;Configure the VLAN IDs, map each one to a Wi-Fi network, and associate them with your AP.&lt;&#x2F;p&gt;
&lt;p&gt;Then you just need to find all the Wi-Fi devices and reset their connections to your segmented networks. Congrats! You can now cut off internet access to all the IoT &amp;amp; music things that lie about your house.&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;# Installing the UniFi Controller on the Raspberry Pi
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;## Preparing your Raspberry Pi for the UniFi Controller
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo apt update
&lt;&#x2F;span&gt;&lt;span&gt;sudo apt upgrade
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;### Adding Entropy using rng-tools
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**2.** To improve the startup speed of the UniFi controller software on our Raspberry Pi, we need to install `rng-tools`.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;We will utilize this package to ensure the Raspberry Pi has enough entropy for the random number generation that the UniFi software uses.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo apt install rng-tools
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**3.** We now need to make a slight change to the rng-tools configuration.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Begin editing the config file by running the following command.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo nano &#x2F;etc&#x2F;default&#x2F;rng-tools-debian
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**4.** Within this file, find and uncomment the following line.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**Find**
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```ini
&lt;&#x2F;span&gt;&lt;span&gt;#HRNGDEVICE=&#x2F;dev&#x2F;hwrng
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**Replace With**
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```ini
&lt;&#x2F;span&gt;&lt;span&gt;HRNGDEVICE=&#x2F;dev&#x2F;hwrng
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;By uncommenting this line, we are adding to the amount of entropy (the amount of randomness) that the system has available.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;The Raspberry Pi features an integrated random number generator that we can utilize to increase the entropy pool.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**5.** Once you have made the change, save the file by pressing CTRL + X, then Y, followed by ENTER.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**6.** Finally, restart the `rng-tools` service by running the command below.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo systemctl restart rng-tools
&lt;&#x2F;span&gt;&lt;span&gt;```
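Before moving on, you can sanity-check the entropy pool yourself (a quick check that isn’t part of the original guide; note that on kernels 5.18 and newer this value is pinned at 256 by design, so don’t be alarmed):

```bash
# Print the kernel's available entropy estimate; on older kernels a value
# in the thousands means rng-tools is feeding the pool.
cat /proc/sys/kernel/random/entropy_avail
```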
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Once the service has finished restarting, it should now be safe to proceed to the next section of this guide.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;### Installing an Older Release of LibSSL
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**7.** Due to the version of MongoDB that we will be utilizing, we will need to install an older release of LibSSL to our Raspberry Pi.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;In particular, we will be installing LibSSL 1.0. You can download this old release by using the following command.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;wget http:&#x2F;&#x2F;ports.ubuntu.com&#x2F;pool&#x2F;main&#x2F;o&#x2F;openssl&#x2F;libssl1.0.0_1.0.2g-1ubuntu4_arm64.deb -O libssl1.0.deb
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**8.** Once the package has been downloaded, all we need to do to install it is to run the command below.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo dpkg -i libssl1.0.deb
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;### Installing MongoDB to your Raspberry Pi for the UniFi Controller
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**9.** To use the UniFi Controller on your Raspberry Pi, we will need to install MongoDB.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;This is the database server that UniFi uses to store all of its data. As we can’t rely on the package repository, we will need to follow some additional steps.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;For this first step, we will download the latest available version of MongoDB 3.6 to our Pi. We are installing 3.6 as this is currently the only supported release for the UniFi Controller.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;wget https:&#x2F;&#x2F;repo.mongodb.org&#x2F;apt&#x2F;ubuntu&#x2F;dists&#x2F;xenial&#x2F;mongodb-org&#x2F;3.6&#x2F;multiverse&#x2F;binary-arm64&#x2F;mongodb-org-server_3.6.22_arm64.deb -O mongodb.deb
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**10.** Once the package is downloaded, install it by using the following command within the terminal.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo dpkg -i mongodb.deb
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**11.** Now that we have installed the MongoDB server, set it to start when your Raspberry Pi boots using the command below.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo systemctl enable mongod
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**12.** Finally, start MongoDB by running the command shown below in the terminal.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;This will start the server immediately, so we won’t have to wait till our device restarts.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo systemctl start mongod
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;## Installing the UniFi Controller to the Raspberry Pi
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**1.** Our first task is to add the UniFi repository to our sources list.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;We can achieve this by running the command below.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;echo &amp;#39;deb [arch=amd64 signed-by=&#x2F;usr&#x2F;share&#x2F;keyrings&#x2F;ubiquiti-archive-keyring.gpg] https:&#x2F;&#x2F;www.ui.com&#x2F;downloads&#x2F;unifi&#x2F;debian stable ubiquiti&amp;#39; | sudo tee &#x2F;etc&#x2F;apt&#x2F;sources.list.d&#x2F;100-ubnt-unifi.list &amp;gt;&#x2F;dev&#x2F;null
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;You might notice that we are using “`amd64`” and not “`arm64`” or “`armhf`“. This is due to Ubiquiti not having their repository set up to mark “`arm64`” as compatible. However, it doesn’t hugely matter as, at the moment, it will still download files compatible with our Raspberry Pi.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**2.** We now need to add the repository’s GPG key by using the following command.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;curl https:&#x2F;&#x2F;dl.ui.com&#x2F;unifi&#x2F;unifi-repo.gpg | sudo tee &#x2F;usr&#x2F;share&#x2F;keyrings&#x2F;ubiquiti-archive-keyring.gpg &amp;gt;&#x2F;dev&#x2F;null
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;The GPG key is what helps tell the package manager it is downloading the correct package.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**3.** As we made changes to the repositories, we need to now update the package list by running the command below.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo apt update
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**4.** Now finally, we can install version 17 of the OpenJDK runtime as well as the Unifi Controller software itself to our Raspberry Pi by running the following command.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;sudo apt install openjdk-17-jre-headless unifi
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Installing UniFi through this method will automatically set up a service. This service will automatically start the UniFi software at boot.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Additionally, we are installing version 17 of the Java runtime environment as it is currently the only version supported by the UniFi controller.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;## First Boot of the UniFi Controller on your Raspberry Pi
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;In this section, we are going to walk you through the initial configuration steps of the UniFi software.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**1.** First, retrieve the [local IP address for your Raspberry Pi](https:&#x2F;&#x2F;pimylifeup.com&#x2F;raspberry-pi-ip-address&#x2F;).
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;If you have terminal access to your Pi, you can use the following command.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```bash
&lt;&#x2F;span&gt;&lt;span&gt;hostname -I
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;**2.** With your Raspberry Pi’s IP address handy, go to the following web address in your favorite web browser.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;Ensure that you replace “`&amp;lt;IPADDRESS&amp;gt;`” with the IP of your Raspberry Pi.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;https:&#x2F;&#x2F;&amp;lt;IPADDRESS&amp;gt;:8443
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
</description>
      </item>
      <item>
          <title>Why does my internet suck? Smart Queues</title>
          <pubDate>Sat, 10 Feb 2024 16:39:55 -0700</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/firewalla-smart-queue/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/firewalla-smart-queue/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/firewalla-smart-queue/">&lt;p&gt;Since the Firewalla has smart queue rules, it’s time to set them up. Firstly, let’s thank the Firewalla folks for getting around to adding domain-based rules. Secondly, thumbs down to providers who don’t disclose their purpose-based domain names.&lt;&#x2F;p&gt;
&lt;p&gt;First step: go find all the ports for the services you rely on. I vaguely recall that streaming services run mostly over UDP, because nobody wants to do all the TCP handshakes. So when setting up my rules, I’ll just allocate the UDP connections to the HIGH priority smart queue.&lt;&#x2F;p&gt;
&lt;p&gt;Second step: create the rules you want. For my setup, I had to go through and create individual smart rules to give priority to these endpoints or specific ports. I did manage to set up some network segmentation, so the rules are only scoped to the primary VLAN that my phones, laptops, and tablets use. I wish Firewalla maintained a list, or enabled a community-defined one, so I wouldn’t have to hunt these all down.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;zoom&quot;&gt;Zoom&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;support.zoom.com&#x2F;hc&#x2F;en&#x2F;article?id=zm_kb&amp;amp;sysparm_article=KB0060548&quot;&gt;Zoom Firewall Ports&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Protocol&lt;&#x2F;th&gt;&lt;th&gt;Ports&lt;&#x2F;th&gt;&lt;th&gt;Source&lt;&#x2F;th&gt;&lt;th&gt;Destination&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;TCP&lt;&#x2F;td&gt;&lt;td&gt;80,443&lt;&#x2F;td&gt;&lt;td&gt;All Zoom clients&lt;&#x2F;td&gt;&lt;td&gt;*.zoom.us&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;TCP&lt;&#x2F;td&gt;&lt;td&gt;443, 8801, 8802&lt;&#x2F;td&gt;&lt;td&gt;All Zoom clients&lt;&#x2F;td&gt;&lt;td&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;UDP&lt;&#x2F;td&gt;&lt;td&gt;3478, 3479, 8801 - 8810&lt;&#x2F;td&gt;&lt;td&gt;All Zoom clients&lt;&#x2F;td&gt;&lt;td&gt;&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;h3 id=&quot;google-meet&quot;&gt;Google Meet&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;support.google.com&#x2F;a&#x2F;answer&#x2F;1279090?hl=en&quot;&gt;Google Meet Firewall Ports&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;For audio and video, set up outbound UDP ports 3478 and 19302–19309.
&lt;ul&gt;
&lt;li&gt;If you want to limit the number of Chrome WebRTC ports being used, use the ports specified at &lt;a href=&quot;https:&#x2F;&#x2F;support.google.com&#x2F;chrome&#x2F;a&#x2F;answer&#x2F;2657289?sjid=15543457588516875079-NA#web_rtc_udp_ports_max&quot;&gt;WebRTC UDP Ports&lt;&#x2F;a&gt;. &lt;&#x2F;li&gt;
&lt;li&gt;Or, you can limit those ports with your firewall.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;stream.meet.google.com&lt;&#x2F;li&gt;
&lt;li&gt;youtube.googleapis.com&lt;&#x2F;li&gt;
&lt;li&gt;www.youtube-nocookie.com&lt;&#x2F;li&gt;
&lt;li&gt;googlevideo.com&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;slack-huddles-aka-amazon-chime&quot;&gt;Slack Huddles (aka Amazon Chime)&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;Check that your network is set up to allow outbound traffic to UDP&#x2F;22466. Otherwise, huddles will use TCP&#x2F;443 for media transport (video and audio).&lt;&#x2F;li&gt;
&lt;li&gt;Allow outbound traffic to TCP&#x2F;443. This is required for huddles to function, even if outbound traffic to UDP&#x2F;22466 is allowed for media transport.&lt;&#x2F;li&gt;
&lt;li&gt;If you’d like, you can limit access to a specific IP range: 99.77.128.0&#x2F;18.
If your environment requires you to allow &lt;a href=&quot;https:&#x2F;&#x2F;my.slack.com&#x2F;help&#x2F;urls&quot;&gt;Slack’s required domains&lt;&#x2F;a&gt;, make sure you approve *.m.chime.aws. We aren’t able to provide a list of static domains, and suggest allowing by wildcard to avoid any network disruptions.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;^ That’s because they don’t manage the service…
https:&#x2F;&#x2F;cloud-native.slack.com&#x2F;help&#x2F;urls&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;json&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-json &quot;&gt;&lt;code class=&quot;language-json&quot; data-lang=&quot;json&quot;&gt;&lt;span&gt;  [
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;*.chime.aws&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;a.slack-edge.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;a.slack-imgs.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;admin.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;alpha.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;api.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;app.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;assets.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;avatars.slack-edge.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;b.slack-edge.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;b.slack-imgs.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;beta.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;blog.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;ca.slack-edge.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;cloud-native.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;downloads.slack-edge.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;edgeapi.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;email.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;email2.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;email3.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;email4.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;emoji.slack-edge.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;example.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;files-edge.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;files-origin.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;files.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;global-upload-edge.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;go-beta.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;go-debug.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;go.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;help.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;hooks.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;join.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;my.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;oauth2.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;platform-tls-client.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;platform.slack-edge.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;slack-email.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;slack-files.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;slack-imgs.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;slack-infra-canvas.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;slack-infra.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;slack.global.ssl.fastly.net&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;slackb.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;spellcheck.slack-edge.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;status.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;try.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;universal-upload-edge.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;upload.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;wss-backup.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;wss-mobile.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;,
&lt;&#x2F;span&gt;&lt;span&gt;    &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;wss-primary.slack.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;]
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
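Entries like *.m.chime.aws in the list above are wildcard suffixes. A sketch of how firewalls typically interpret such an allow-list entry (the function name and sample hostnames are my own, not from Slack’s docs):

```bash
# Match a hostname against a wildcard allow-list entry like "*.m.chime.aws"
# by treating the pattern as a domain-suffix match.
matches_wildcard() {
  pattern=$2
  case "$1" in
    *."${pattern#\*.}") return 0 ;;   # hostname ends with ".m.chime.aws"
    *) return 1 ;;
  esac
}
matches_wildcard "u1.m.chime.aws" "*.m.chime.aws" && echo "allowed"
```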
&lt;p&gt;Looks like they use Chime under the hood for their “huddles”, so let’s get the correct information from the source:&lt;&#x2F;p&gt;
&lt;p&gt;https:&#x2F;&#x2F;docs.aws.amazon.com&#x2F;chime&#x2F;latest&#x2F;ag&#x2F;network-config.html
https:&#x2F;&#x2F;answers.chime.aws&#x2F;articles&#x2F;123&#x2F;hosts-ports-and-protocols-needed-for-amazon-chime.html&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Amazon Chime Meetings, Chat, and Business Calling&lt;&#x2F;strong&gt; uses 99.77.128.0&#x2F;18 TCP&#x2F;443 UDP&#x2F;3478&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
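Since Chime media comes from a single CIDR block, you can check whether a given address falls inside 99.77.128.0&#x2F;18 with plain shell arithmetic (a sketch of my own, not a Firewalla feature):

```bash
# Check membership in 99.77.128.0/18 by comparing the masked address
# against the masked network.
in_chime_range() {
  IFS=. read -r a b c d <<< "$1"
  ip=$(( (a << 24) + (b << 16) + (c << 8) + d ))
  net=$(( (99 << 24) + (77 << 16) + (128 << 8) ))
  mask=$(( (0xFFFFFFFF << 14) & 0xFFFFFFFF ))   # /18 network mask
  [ $(( ip & mask )) -eq $net ]
}
in_chime_range 99.77.130.5 && echo "Chime traffic"
```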
&lt;h3 id=&quot;ms-teams&quot;&gt;MS Teams&lt;&#x2F;h3&gt;
&lt;p&gt;Of course this would be a black hole of vague redirects to find the actual information. What a shitshow, MS. Also, I wish I could just set a domain for these fools.
https:&#x2F;&#x2F;learn.microsoft.com&#x2F;en-us&#x2F;microsoft-365&#x2F;enterprise&#x2F;urls-and-ip-address-ranges?view=o365-worldwide&lt;&#x2F;p&gt;
&lt;h2 id=&quot;skype-for-business-online-and-microsoft-teams&quot;&gt;Skype for Business Online and Microsoft Teams&lt;&#x2F;h2&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;ID&lt;&#x2F;th&gt;&lt;th&gt;Category&lt;&#x2F;th&gt;&lt;th&gt;ER&lt;&#x2F;th&gt;&lt;th&gt;Addresses&lt;&#x2F;th&gt;&lt;th&gt;Ports&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;11&lt;&#x2F;td&gt;&lt;td&gt;Optimize  &lt;br&gt;Required&lt;&#x2F;td&gt;&lt;td&gt;Yes&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;13.107.64.0&#x2F;18, 52.112.0.0&#x2F;14, 52.122.0.0&#x2F;15, 2603:1063::&#x2F;38&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;UDP:&lt;&#x2F;strong&gt; 3478, 3479, 3480, 3481&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;12&lt;&#x2F;td&gt;&lt;td&gt;Allow  &lt;br&gt;Required&lt;&#x2F;td&gt;&lt;td&gt;Yes&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;*.lync.com, *.teams.microsoft.com, teams.microsoft.com&lt;&#x2F;code&gt;  &lt;br&gt;&lt;code&gt;13.107.64.0&#x2F;18, 52.112.0.0&#x2F;14, 52.122.0.0&#x2F;15, 52.238.119.141&#x2F;32, 52.244.160.207&#x2F;32, 2603:1027::&#x2F;48, 2603:1037::&#x2F;48, 2603:1047::&#x2F;48, 2603:1057::&#x2F;48, 2603:1063::&#x2F;38, 2620:1ec:6::&#x2F;48, 2620:1ec:40::&#x2F;42&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;TCP:&lt;&#x2F;strong&gt; 443, 80&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;16&lt;&#x2F;td&gt;&lt;td&gt;Default  &lt;br&gt;Required&lt;&#x2F;td&gt;&lt;td&gt;No&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;*.keydelivery.mediaservices.windows.net, *.streaming.mediaservices.windows.net, mlccdn.blob.core.windows.net&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;TCP:&lt;&#x2F;strong&gt; 443&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;17&lt;&#x2F;td&gt;&lt;td&gt;Default  &lt;br&gt;Required&lt;&#x2F;td&gt;&lt;td&gt;No&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;aka.ms&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;TCP:&lt;&#x2F;strong&gt; 443&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;18&lt;&#x2F;td&gt;&lt;td&gt;Default  &lt;br&gt;Optional  &lt;br&gt;&lt;strong&gt;Notes:&lt;&#x2F;strong&gt; Federation with Skype and public IM connectivity: Contact picture retrieval&lt;&#x2F;td&gt;&lt;td&gt;No&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;*.users.storage.live.com&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;TCP:&lt;&#x2F;strong&gt; 443&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;19&lt;&#x2F;td&gt;&lt;td&gt;Default  &lt;br&gt;Optional  &lt;br&gt;&lt;strong&gt;Notes:&lt;&#x2F;strong&gt; Applies only to those who deploy the Conference Room Systems&lt;&#x2F;td&gt;&lt;td&gt;No&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;adl.windows.com&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;TCP:&lt;&#x2F;strong&gt; 443, 80&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;27&lt;&#x2F;td&gt;&lt;td&gt;Default  &lt;br&gt;Required&lt;&#x2F;td&gt;&lt;td&gt;No&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;*.secure.skypeassets.com, mlccdnprod.azureedge.net&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;TCP:&lt;&#x2F;strong&gt; 443&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;127&lt;&#x2F;td&gt;&lt;td&gt;Default  &lt;br&gt;Required&lt;&#x2F;td&gt;&lt;td&gt;No&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;*.skype.com&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;TCP:&lt;&#x2F;strong&gt; 443, 80&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;180&lt;&#x2F;td&gt;&lt;td&gt;Default  &lt;br&gt;Required&lt;&#x2F;td&gt;&lt;td&gt;No&lt;&#x2F;td&gt;&lt;td&gt;&lt;code&gt;compass-ssl.microsoft.com&lt;&#x2F;code&gt;&lt;&#x2F;td&gt;&lt;td&gt;&lt;strong&gt;TCP:&lt;&#x2F;strong&gt; 443&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;h3 id=&quot;apple&quot;&gt;Apple&lt;&#x2F;h3&gt;
&lt;p&gt;https:&#x2F;&#x2F;support.apple.com&#x2F;en-us&#x2F;HT202944
https:&#x2F;&#x2F;support.apple.com&#x2F;en-us&#x2F;102036&lt;&#x2F;p&gt;
&lt;p&gt;I do a bunch of RDP to headless machines, so I need to give those ports some priority.&lt;&#x2F;p&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Port(s)&lt;&#x2F;th&gt;&lt;th&gt;TCP&#x2F;UDP&lt;&#x2F;th&gt;&lt;th&gt;Protocol&lt;&#x2F;th&gt;&lt;th&gt;RFC&lt;&#x2F;th&gt;&lt;th&gt;Service name&lt;&#x2F;th&gt;&lt;th&gt;Used by&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;16384–16403&lt;&#x2F;td&gt;&lt;td&gt;UDP&lt;&#x2F;td&gt;&lt;td&gt;Real-Time Transport Protocol (RTP), Real-Time Control Protocol (RTCP)&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;connected, —&lt;&#x2F;td&gt;&lt;td&gt;Messages (Audio RTP, RTCP; Video RTP, RTCP)&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;16384–16387&lt;&#x2F;td&gt;&lt;td&gt;UDP&lt;&#x2F;td&gt;&lt;td&gt;Real-Time Transport Protocol (RTP), Real-Time Control Protocol (RTCP)&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;connected, —&lt;&#x2F;td&gt;&lt;td&gt;FaceTime, Game Center&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;16393–16402&lt;&#x2F;td&gt;&lt;td&gt;UDP&lt;&#x2F;td&gt;&lt;td&gt;Real-Time Transport Protocol (RTP), Real-Time Control Protocol (RTCP)&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;FaceTime, Game Center&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;16403–16472&lt;&#x2F;td&gt;&lt;td&gt;UDP&lt;&#x2F;td&gt;&lt;td&gt;Real-Time Transport Protocol (RTP), Real-Time Control Protocol (RTCP)&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;Game Center&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;5223&lt;&#x2F;td&gt;&lt;td&gt;TCP&lt;&#x2F;td&gt;&lt;td&gt;Apple Push Notification Service (APNS)&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;iCloud DAV Services (Contacts, Calendars, Bookmarks), &lt;a href=&quot;https:&#x2F;&#x2F;support.apple.com&#x2F;kb&#x2F;HT203609&quot;&gt;Push Notifications&lt;&#x2F;a&gt;, FaceTime, iMessage, Game Center, Photo Stream&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;3478–3497&lt;&#x2F;td&gt;&lt;td&gt;UDP&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;nat-stun-port - ipether232port&lt;&#x2F;td&gt;&lt;td&gt;FaceTime, Game Center&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;3283&lt;&#x2F;td&gt;&lt;td&gt;TCP&#x2F;UDP&lt;&#x2F;td&gt;&lt;td&gt;Apple Remote Desktop and Classroom&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;net-assistant, classroom&lt;&#x2F;td&gt;&lt;td&gt;Apple Remote Desktop&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;5900&lt;&#x2F;td&gt;&lt;td&gt;TCP&lt;&#x2F;td&gt;&lt;td&gt;Remote Framebuffer&lt;&#x2F;td&gt;&lt;td&gt;6143&lt;&#x2F;td&gt;&lt;td&gt;rfb&lt;&#x2F;td&gt;&lt;td&gt;Apple Remote Desktop, Screen Sharing&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;5900&lt;&#x2F;td&gt;&lt;td&gt;UDP&lt;&#x2F;td&gt;&lt;td&gt;Remote Framebuffer,  Real-Time Transport Protocol (RTP), Real-Time Control Protocol (RTCP)&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;Apple Remote Desktop, Screen Sharing&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;5901–5902&lt;&#x2F;td&gt;&lt;td&gt;UDP&lt;&#x2F;td&gt;&lt;td&gt;Real-Time Transport Protocol (RTP), Real-Time Control Protocol (RTCP)&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;—&lt;&#x2F;td&gt;&lt;td&gt;Apple Remote Desktop, Screen Sharing&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
</description>
      </item>
      <item>
          <title>All the cool kids are running NixOS</title>
          <pubDate>Fri, 12 Jan 2024 00:22:11 -0700</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/all-the-cool-kids-are-running-nixos/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/all-the-cool-kids-are-running-nixos/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/all-the-cool-kids-are-running-nixos/">&lt;h1 id=&quot;all-the-cool-kids-are-running-nixos&quot;&gt;All the cool kids are running NixOS&lt;&#x2F;h1&gt;
&lt;p&gt;My first intro to NixOS was a demo that an engineer ran through in my company’s DevOps guild. My first thought was, and still mainly is: why are you not just dumping that into a container, which also provides a deterministic, reproducible output? In general, I put NixOS on the back burner as I explored the funky town of buildpacks on a novelty government cloud provider. I didn’t think much else of it for a couple of months.&lt;&#x2F;p&gt;
&lt;p&gt;Over in a homelab thread, NixOS popped up again, this time in the context of using NixOS to manage Raspberry Pi 5 configuration. I revisited some GitHub links but came away disappointed; I just don’t think there is much traction there quite yet.&lt;&#x2F;p&gt;
&lt;p&gt;Then I was exposed to some humble nerd bragging, someone threw down the words:&lt;&#x2F;p&gt;
&lt;p&gt;“Very please with my dev machine these days. Apple Silicon Mac + nix-darwin + UTM(for Linux VMs). Nix flakes for dev environments&#x2F; reproducibility is so pleasant. Ghostty is already a solid terminal”&lt;&#x2F;p&gt;
&lt;p&gt;There was a lot that I needed to process with this declaration:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Ghostty 👻 - that’s still in closed beta by &lt;a href=&quot;https:&#x2F;&#x2F;mitchellh.com&#x2F;&quot;&gt;Mitchell Hashimoto&lt;&#x2F;a&gt;; I’d love to try it out.&lt;&#x2F;li&gt;
&lt;li&gt;My dev machines have some sprawl that I would love to sort out. Some declarative steamroller would be nice ☠️&lt;&#x2F;li&gt;
&lt;li&gt;I’ll need to Google &lt;a href=&quot;https:&#x2F;&#x2F;getutm.app&#x2F;&quot;&gt;UTM&lt;&#x2F;a&gt; because I’m still terrible at acronyms.&lt;&#x2F;li&gt;
&lt;li&gt;Nix has popped up again; how far off could it be from Ansible?&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;The quick follow-up answers:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Ghostty - yup, still closed beta. Watch the Discord to get beta access.&lt;&#x2F;li&gt;
&lt;li&gt;Let’s fix developer sprawl with nix-darwin.&lt;&#x2F;li&gt;
&lt;li&gt;Oh boy, I’m a little confused about what’s happening with nix-darwin, flakes, and home-manager.&lt;&#x2F;li&gt;
&lt;li&gt;I’ll figure out UTM once I get nix-darwin sorted.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;getting-started-with-nix-darwin&quot;&gt;Getting started with &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;LnL7&#x2F;nix-darwin&quot;&gt;nix-darwin&lt;&#x2F;a&gt;&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;baby-steps&quot;&gt;Baby Steps&lt;&#x2F;h3&gt;
&lt;p&gt;Get the TLDR and the in-depth guides to get your feet wet with NixOS:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;nixos-and-flakes.thiscute.world&#x2F;&quot;&gt;nixos-and-flakes.thiscute.world&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;zero-to-nix.com&#x2F;&quot;&gt;zero-to-nix.com&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;nix.dev&#x2F;tutorials&#x2F;nix-language&quot;&gt;nix.dev - tutorials&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Use Determinate Systems to install Nix safely on macOS: &lt;a href=&quot;https:&#x2F;&#x2F;determinate.systems&#x2F;posts&#x2F;nix-survival-mode-on-macos&quot;&gt;Nix Survival Mode on macOS&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
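&lt;p&gt;To make those guides concrete, here’s roughly the shape of flake they all converge on. This is a minimal sketch, not my actual config: the hostname matches the &lt;code&gt;thinkstudio&lt;&#x2F;code&gt; machine that shows up later in this post, and &lt;code&gt;configuration.nix&lt;&#x2F;code&gt; is a placeholder module path.&lt;&#x2F;p&gt;

```nix
{
  description = "Minimal nix-darwin flake sketch (illustrative only)";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    nix-darwin.url = "github:LnL7/nix-darwin";
    nix-darwin.inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs = { self, nix-darwin, ... }: {
    # The attribute name should match the hostname you rebuild against.
    darwinConfigurations."thinkstudio" = nix-darwin.lib.darwinSystem {
      modules = [ ./configuration.nix ];
    };
  };
}
```

&lt;p&gt;If memory serves, &lt;code&gt;darwin-rebuild switch --flake .#thinkstudio&lt;&#x2F;code&gt; is what applies it.&lt;&#x2F;p&gt;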
&lt;p&gt;Search Packages: &lt;a href=&quot;https:&#x2F;&#x2F;search.nixos.org&#x2F;packages?channel=23.05&quot;&gt;search.nixos.org&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h3 id=&quot;useful-takes-on-setup&quot;&gt;Useful takes on setup&lt;&#x2F;h3&gt;
&lt;p&gt;I perused a bunch of these trying to get a foundation on how various people interpreted using flakes, home-manager, or the likes.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;yusef.napora.org&#x2F;blog&#x2F;nixos-asahi&#x2F;&quot;&gt;yusef.napora.org - nixos-asahi&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;br0g.0brg.net&#x2F;nix.html&quot;&gt;br0g.0brg.net - nix&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;krisztianfekete.org&#x2F;nixos-on-apple-silicon-with-utm&#x2F;&quot;&gt;krisztianfekete.org - nixos on-apple silicon with utm&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;srid&#x2F;nixos-config&#x2F;blob&#x2F;master&#x2F;home&#x2F;neovim.lua&quot;&gt;srid - nixos-config&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;nixcademy.com&#x2F;2024&#x2F;01&#x2F;15&#x2F;nix-on-macos&#x2F;&quot;&gt;nixcademy.com - nix on mac&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;get-lost-on-what-to-do-next&quot;&gt;Get lost on what to do next&lt;&#x2F;h3&gt;
&lt;p&gt;Figure out how to install some packages and find some examples: &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;ryan4yin&#x2F;nix-darwin-kickstarter&#x2F;blob&#x2F;main&#x2F;rich-demo&#x2F;flake.nix&quot;&gt;nix-darwin-kickstarter rich-demo flake.nix&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Go get overwhelmed with Mitchell’s setup: &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;mitchellh&#x2F;nixos-config&quot;&gt;mitchellh&#x2F;nixos-config&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;A modified take on Mitchell’s: &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cor&#x2F;nixos-config&#x2F;tree&#x2F;master&quot;&gt;cor&#x2F;nixos-config&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h4 id=&quot;mitchell-s-config&quot;&gt;Mitchell’s config&lt;&#x2F;h4&gt;
&lt;p&gt;Growing up I had typing class. Once or twice a week we would go to this dark room filled with computers and get the joy of playing a speed-typing game in silence for like an hour. That mild abuse of short attention spans feels like what I’m going through with Mitchell’s setup.&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;linux - well, it’s been a decade since I ran a desktop&lt;&#x2F;li&gt;
&lt;li&gt;kitty&#x2F;alacritty - new terminals; I’ve been on iTerm2 for a hot minute these days&lt;&#x2F;li&gt;
&lt;li&gt;fish - this seems fine, until copy&#x2F;paste doesn’t seem to work and you’re on page 12 trying to figure out if the key bindings have all changed&lt;&#x2F;li&gt;
&lt;li&gt;neovim - oh man, what have I gotten into? I feel like all the simple things are now complicated again. At this point I’m afraid to ask.
&lt;ul&gt;
&lt;li&gt;oh plugin hell, what are all these things - treesitter, lua, telescope, etc.
&lt;ul&gt;
&lt;li&gt;great, they’re installed; now how do I create a branch?&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;raycast - neat, it’s another &lt;code&gt;Command - spacebar&lt;&#x2F;code&gt; thing. I assume this is needed since nix doesn’t install applications into the Applications dir.&lt;&#x2F;li&gt;
&lt;li&gt;tmux - so why do none of the &lt;code&gt;Control - b + %&lt;&#x2F;code&gt; things work?&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;cachix&quot;&gt;Cachix&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;app.cachix.org&quot;&gt;cachix.org&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Guess I’ll set up Cachix? I’ll figure out the meaning of this a little later. Though, this might be another rabbit hole that is too much for me to think through completely.&lt;br &#x2F;&gt;
&lt;a href=&quot;https:&#x2F;&#x2F;app.cachix.org&#x2F;cache&#x2F;kcirtapfromspace-nixos-config#pull&quot;&gt;kcirtapfromspace-nixos-config&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
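&lt;p&gt;For reference, wiring the cache into the config looks something like this. A sketch only; the public key is a placeholder for whatever the cache’s Cachix page actually shows:&lt;&#x2F;p&gt;

```nix
# Assumed sketch: tell nix to pull pre-built store paths from the Cachix cache.
nix.settings = {
  substituters = [ "https://kcirtapfromspace-nixos-config.cachix.org" ];
  trusted-public-keys = [ "kcirtapfromspace-nixos-config.cachix.org-1:PLACEHOLDER_PUBLIC_KEY" ];
};
```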
&lt;h4 id=&quot;personal-auth-token&quot;&gt;Personal Auth Token&lt;&#x2F;h4&gt;
&lt;p&gt;Login to cachix.org and figure out how to generate an auth token.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;❯&lt;&#x2F;span&gt;&lt;span&gt; cachix authtoken &amp;lt;magical token here&amp;gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;Written&lt;&#x2F;span&gt;&lt;span&gt; to &#x2F;Users&#x2F;thinkstudio&#x2F;.config&#x2F;cachix&#x2F;cachix.dhall
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;After Cachix is set up, you’ll see logs like this:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;copying&lt;&#x2F;span&gt;&lt;span&gt; path &amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&#x2F;nix&#x2F;store&#x2F;7l8l8by558mf76vf9ngpg7lq0c8gwqby-source&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39; from &amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;https:&#x2F;&#x2F;cache.nixos.org&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;...
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;[1&lt;&#x2F;span&gt;&lt;span&gt; copied (147.6 MiB)&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;,&lt;&#x2F;span&gt;&lt;span&gt; 24.8 MiB DL] evaluating derivation &amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;git+file:&#x2F;&#x2F;&#x2F;Users&#x2F;thinkstudio&#x2F;.config&#x2F;nix-darwin#darwinConfigurations.thinkstudio.system&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;virtual-machines-with-utm&quot;&gt;Virtual Machines with UTM&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;mac.getutm.app&#x2F;&quot;&gt;mac.getutm.app&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Surprise: &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;utmapp&#x2F;UTM&quot;&gt;UTM&lt;&#x2F;a&gt; is really just an open-source take on VirtualBox (it’s a QEMU front end) that works with Apple’s M1 ARM64 architecture. My work has led me deeper and deeper into the world of microservices and Docker containers, so I haven’t had a hypervisor in ages! I’ve led teams to containerize and use &lt;a href=&quot;https:&#x2F;&#x2F;code.visualstudio.com&#x2F;docs&#x2F;devcontainers&#x2F;containers&quot;&gt;.devcontainers&lt;&#x2F;a&gt; to build an immutable env that can easily be shared across a team. I can see a setting where this is a requirement for contributing securely. I like the abstraction from the host machine; it adds an extra buffer of security. There is also the cattle-not-pets mentality: once the base configuration is established for the VM, it can be cloned to infinity and used as an ephemeral or persistent dedicated local environment for any dev work.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;boot-up&quot;&gt;Boot up&lt;&#x2F;h3&gt;
&lt;p&gt;Boot up can be annoying, as you’re often stuck on “Display output is not active”.&lt;&#x2F;p&gt;
&lt;p&gt;Trying to get the mouse to wake up the screen sometimes does nothing, or boot time is just that slow. I feel like on the Mac Studio, with 32GB of RAM allocated, this should be snappy fast.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;isos&quot;&gt;ISOs&lt;&#x2F;h3&gt;
&lt;p&gt;Go get them&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;nixos.org&#x2F;download#nixos-iso&quot;&gt;nixos-iso&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;apps.apple.com&#x2F;us&#x2F;app&#x2F;crystalfetch-iso-downloader&#x2F;id6454431289?mt=12&quot;&gt;windows-iso&lt;&#x2F;a&gt; Via CrystalFetch&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;hashed-password&quot;&gt;Hashed Password&lt;&#x2F;h3&gt;
&lt;p&gt;The &lt;code&gt;nixos.nix&lt;&#x2F;code&gt; config houses a hashed password for the VM. This is a quick means to generate a compatible password if you do not have &lt;code&gt;mkpasswd&lt;&#x2F;code&gt; available.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;❯&lt;&#x2F;span&gt;&lt;span&gt; docker run&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -it --rm&lt;&#x2F;span&gt;&lt;span&gt; alpine sh&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -c &lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;printf &amp;quot;password&amp;quot; | mkpasswd -s -m md5&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
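&lt;p&gt;If you don’t feel like pulling a container for this, &lt;code&gt;openssl&lt;&#x2F;code&gt; can produce the same style of MD5-crypt hash (assuming you have it installed):&lt;&#x2F;p&gt;

```shell
# The $1$ prefix marks an MD5-crypt hash, the same format mkpasswd -m md5 emits.
hash=$(openssl passwd -1 "password")
echo "$hash"
```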
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;nixpkgs-manual-sphinx-markedown-example.netlify.app&#x2F;configuration&#x2F;user-mgmt.xml.html&quot;&gt;example of user management with hashed passwords&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
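&lt;p&gt;For reference, a sketch of where that hash ends up in the config; the username and hash here are placeholders, not my real setup:&lt;&#x2F;p&gt;

```nix
users.users.demo = {
  isNormalUser = true;
  extraGroups = [ "wheel" ];           # allow sudo
  hashedPassword = "$1$salt$REPLACE";  # paste the mkpasswd output here
};
```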
&lt;h3 id=&quot;bootstrapping-vm&quot;&gt;Bootstrapping VM&lt;&#x2F;h3&gt;
&lt;p&gt;Mitchell has provided a Makefile filled with some convenient ssh commands that will help configure VMs. The order of operations:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;Elevate to Root &amp;amp; Set Password&lt;&#x2F;li&gt;
&lt;li&gt;Check the IP with &lt;code&gt;ifconfig&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;make vm&#x2F;Bootstrap0&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Run &lt;code&gt;make vm&#x2F;Bootstrap&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;Login with hashed password&lt;&#x2F;li&gt;
&lt;li&gt;Copy Secrets over to VM&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;check-git-gpg-certs-clone-a-repo&quot;&gt;Check git, GPG certs, clone a repo&lt;&#x2F;h3&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;❯&lt;&#x2F;span&gt;&lt;span&gt; gpg&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --list-secret-keys --keyid-format&lt;&#x2F;span&gt;&lt;span&gt;=long
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;❯&lt;&#x2F;span&gt;&lt;span&gt; eval &amp;quot;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ssh-agent -s&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;)&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;fish:&lt;&#x2F;span&gt;&lt;span&gt; Unsupported use of &amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;=&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;. In fish, please use &amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;set SSH_AUTH_SOCK &#x2F;tmp&#x2F;ssh-XXXXXX8h5pip&#x2F;agent.7808&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;.
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;❯&lt;&#x2F;span&gt;&lt;span&gt; kcirtap@dev &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;~&lt;&#x2F;span&gt;&lt;span&gt;&amp;gt; eval $(&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ssh-agent -c&lt;&#x2F;span&gt;&lt;span&gt;)
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;Agent&lt;&#x2F;span&gt;&lt;span&gt; pid 7863 
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;❯&lt;&#x2F;span&gt;&lt;span&gt; git clone git@github.com:kcirtapfromspace&#x2F;nixos-config.git
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;install-necessary-packages&quot;&gt;Install necessary packages&lt;&#x2F;h3&gt;
&lt;p&gt;Now for any project you have a dedicated env which you can rip &amp;amp; replace.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;nix-env -i &lt;&#x2F;span&gt;&lt;span&gt;&amp;lt;package&amp;gt;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;fun-little-things&quot;&gt;“Fun” little things&lt;&#x2F;h3&gt;
&lt;p&gt;I’m a big fan of some of my ingrained muscle memory with Mac keybindings:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;code&gt;Command - L&lt;&#x2F;code&gt; yeah, that will lock the screen
&lt;code&gt;Command - W&lt;&#x2F;code&gt; you wanted to close the VM, right? Right?
&lt;code&gt;Command - C&#x2F;V&lt;&#x2F;code&gt; you’re going to want to press the Ctrl button&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>Why does my internet suck?</title>
          <pubDate>Thu, 11 Jan 2024 01:18:48 -0700</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/why-does-my-internet-suck/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/why-does-my-internet-suck/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/why-does-my-internet-suck/">&lt;h1 id=&quot;why-does-my-internet-suck&quot;&gt;Why does my Internet suck?&lt;&#x2F;h1&gt;
&lt;p&gt;Early in my career, working from home was really a novelty, so I didn’t care too much about my Internet. The TV worked, and I could play online matches of Halo. Life was great! Though, I do not miss the hours I spent idling in traffic, or my existential thoughts about the decades of politics my manager had to navigate to acquire an entire row of end cubes with a corner view of the Flatirons.&lt;&#x2F;p&gt;
&lt;p&gt;Since then, my work has transitioned to full-time views looking out into the world from the 1940s stucco box of bricks I call home. These days, existence has become more and more dependent on that little black box provided by Xfinity, for a convenient price of $15 per month. One after another, it’s a hodgepodge of calendar calls on a variety of vendors (FaceTime, Meet, Zoom, Hangouts, Huddles, or MS Teams), all with their own little annoyances. Yes, that list is rank-ordered by the quality I experience. Considering my reality, it’s kind of a wonder how much effort I put into trying to ignore my internet woes. I have mainly just accepted persistent quality issues as I sit on call after call where my video drops out or my audio sounds like a DJ cutting records. I have largely assumed that my issues stem from some combo of buggy products and Comcast being a soulless conglomerate that couldn’t care less about its users’ experience. I ultimately categorized these experiences with the tune of the Frank Sinatra song “That’s Life”, like I do with so many other things.&lt;&#x2F;p&gt;
&lt;p&gt;While I feel that I have an unusually high tolerance for life’s little inconveniences, my partner’s tolerance for these things is unusually thin. I do hit a breaking point, but it’s more from hearing a barrage of exasperated grievances about our terrible Wi-Fi. Recently it’s been doubly triggering: my years of perceived schlepping on the couch with my laptop apparently weren’t actually work, and now she is alongside me on the couch, schlepping remotely with her job &amp;amp; master’s program. So the occasional grievance has morphed into a full-blown catastrophe, and I now have trigger phrases like “Our Internet sucks” and “The Wi-Fi is garbage”.&lt;&#x2F;p&gt;
&lt;p&gt;I originally started network changes small, mainly to stop paying rental fees. So, I at least ditched the rental Comcast gear and reviewed cable modems and Wi-Fi routers that had reasonable prices and a healthy amount of reviews. In 2019, I picked up a &lt;a href=&quot;https:&#x2F;&#x2F;www.amazon.com&#x2F;gp&#x2F;product&#x2F;B06XZ3S6B8&#x2F;ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;amp;psc=1&quot;&gt;TP-Link AC2300&lt;&#x2F;a&gt; and a &lt;a href=&quot;https:&#x2F;&#x2F;www.amazon.com&#x2F;gp&#x2F;product&#x2F;B06XGZBCKP&#x2F;ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;amp;psc=1&quot;&gt;NETGEAR CM600&lt;&#x2F;a&gt;. This was the era when we had the cheapest Comcast Internet available, and I was blissfully ignorant. That lasted until early 2022, when I replaced the TP-Link with the &lt;a href=&quot;https:&#x2F;&#x2F;www.amazon.com&#x2F;gp&#x2F;product&#x2F;B09DFRGYNQ&#x2F;ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;amp;psc=1&quot;&gt;NETGEAR R6700AXS&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;Around this time I also began to tinker with more Raspberry Pi hobbies, and in general our home devices started to grow (smart lights, PiHole, AlgoTrading, &amp;amp; personal devices). In turn I bumped up the bandwidth on Xfinity, and Bob’s your uncle, right?&lt;&#x2F;p&gt;
&lt;p&gt;Not a chance! The complaints kept rolling. I kicked off looking into getting fiber, but those leads didn’t go far. So in 2023 I maxed out the Xfinity bandwidth to gig+ speeds. Well, the old modem couldn’t handle speeds over a gig, and regular usage didn’t come close to those speeds anyway. So I picked up a new modem, the &lt;a href=&quot;https:&#x2F;&#x2F;www.amazon.com&#x2F;gp&#x2F;product&#x2F;B08GWNZ9VF&#x2F;ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;amp;psc=1&quot;&gt;NETGEAR CM2000&lt;&#x2F;a&gt;, to handle the additional capacity. I also liked the idea that mesh support might help dead spots, so I added a &lt;a href=&quot;https:&#x2F;&#x2F;www.amazon.com&#x2F;gp&#x2F;product&#x2F;B0B3SQK74L&#x2F;ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&amp;amp;psc=1&quot;&gt;TP-Link AXE5400&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;The problem is my partner works at our dining table for the most part, with a direct line of sight to the Wi-Fi router, which is probably only 12’ - 15’ away. So, I don’t think mesh will help her issues, but it might still solve my abysmal performance in my office. My desk in my office is probably only 15’ - 18’ away from the router, but there is a zigzag hallway and a bathroom that block direct line of sight. I really think the stucco walls might have a metal lath that acts as a Faraday cage, creating chaos for signals trying to reach that room.&lt;&#x2F;p&gt;
&lt;p&gt;The problem is the grievance bell is still ringing. At this point, our internet experience is like when Bill &amp;amp; Ted go to the future: it’s all out of place. I’m streaming 4K, downloading Docker images is a breeze, uploading is great, but video conferencing is still mainly shit.&lt;&#x2F;p&gt;
&lt;p&gt;Is it the Pi Hole? IDK. It’s late 2023, so I figure I’ll drop the Pi Hole and pick up a &lt;a href=&quot;https:&#x2F;&#x2F;firewalla.com&#x2F;&quot;&gt;Firewalla&lt;&#x2F;a&gt;. It covers all the stuff I was doing with the Pi Hole: DNS over HTTPS with a private Cloudflare endpoint, ad blocking, and configurable VPNs (though I’m not sold on those unless I was hosting my own and letting my traffic exit through AWS). It’s got VLANs, and it can run &lt;a href=&quot;https:&#x2F;&#x2F;homebridge.io&#x2F;&quot;&gt;HomeBridge&lt;&#x2F;a&gt; to manage all the IoT that I’ve picked up. Great! I was able to get all that running, but there continue to be issues…&lt;&#x2F;p&gt;
&lt;p&gt;Well, let’s figure out networking. All the backbone NICs seem to be using 1GbE except for the Firewalla. There are some quirks, and I think the agg link to the Firewalla through the Wi-Fi router is just not cutting it. Life at this point is more or less the same, except I’ve shelled out more money for more or less the experience I was having back in 2019. (That’s not entirely true; most of the internet experience is snappy, except for ad-heavy sites such as Instagram, where it’s really noticeable: a 50&#x2F;50 crap shoot whether content will load. The funny thing is, if you switch to 5G it all loads quickly.) At this point my tiny home network looks something like this:&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;static&#x2F;sucky_network.png&quot; alt=&quot;Network Topology of my Home Network&quot; title=&quot;Busted Home Network&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Well, maybe the Ethernet devices don’t help the Wi-Fi router.
I created a new plan to pick up a switch and put that into the mix, and ended up fetching a &lt;a href=&quot;https:&#x2F;&#x2F;www.qnap.com&#x2F;en&#x2F;product&#x2F;qsw-m408-4c&quot;&gt;QNAP QSW-M408-4C&lt;&#x2F;a&gt;. Pretty excited!! Eight 1GbE ports &amp;amp; four 10GbE. Now I have enough ports to wire most of my wired things and hobbies, and I get to negotiate true NBASE-T 10GbE Ethernet with the Mac Studio (😂 maybe one day my outbound network could absorb that). I was also able to configure dual 2.5GbE agg links to the Firewalla. This should be plenty to handle my network traffic.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;static&#x2F;less_sucky_network.png&quot; alt=&quot;Network Topology of my Home Network&quot; title=&quot;Less Busted Home Network&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;bufferbloat&quot;&gt;BufferBloat&lt;&#x2F;h2&gt;
&lt;p&gt;“Do your video or audio calls sometimes stutter? Does your web browsing slow down? Do video games lag?
&lt;strong&gt;If so, bufferbloat may be to blame.&lt;&#x2F;strong&gt;&lt;&#x2F;p&gt;
&lt;p&gt;What Is Bufferbloat?
Bufferbloat is a software issue with networking equipment that causes spikes in your Internet connection’s latency when a device on the network uploads or downloads files.”&lt;&#x2F;p&gt;
&lt;p&gt;https:&#x2F;&#x2F;www.bufferbloat.net&#x2F;projects&#x2F;bloat&#x2F;wiki&#x2F;Tests_for_Bufferbloat&#x2F;
https:&#x2F;&#x2F;bufferbloat-and-beyond.net&#x2F;
This sounds like what i’m experiencing.&lt;&#x2F;p&gt;
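&lt;p&gt;As a toy illustration of what these tests grade: measure latency idle, measure it again while the link is saturated, and bucket the increase. A minimal sketch only; it uses single readings instead of medians, and the thresholds are my rough guesses, not Waveform’s actual bands.&lt;&#x2F;p&gt;

```shell
# Toy grader: bucket the latency increase under load (integer milliseconds).
bufferbloat_grade() {
  idle=$1 loaded=$2
  inc=$((loaded - idle))
  if   [ "$inc" -lt 30 ];  then echo A
  elif [ "$inc" -lt 60 ];  then echo B
  elif [ "$inc" -lt 200 ]; then echo C
  elif [ "$inc" -lt 400 ]; then echo D
  else echo F
  fi
}
bufferbloat_grade 21 32    # latency barely moves under load
bufferbloat_grade 21 310   # classic bufferbloat spike
```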
&lt;h3 id=&quot;reporting-results&quot;&gt;Reporting Results&lt;&#x2F;h3&gt;
&lt;h4 id=&quot;mac-studio&quot;&gt;Mac Studio&lt;&#x2F;h4&gt;
&lt;p&gt;With the Mac Studio, I’m able to show that the network and general bandwidth are not the issue. Even when the agg link is two 1GbE connections, speeds are decent enough to get a Good rating from Cloudflare across the board. Bumping the agg links to 2.5GbE (the Firewalla NICs’ max), I’m finally able to use the full bandwidth from my ISP.&lt;&#x2F;p&gt;
&lt;h5 id=&quot;10gbe-2gbe-firewalla-comcast-1m-gbe&quot;&gt;10GbE -&amp;gt; 2GbE (Firewalla) -&amp;gt; Comcast 1+GbE&lt;&#x2F;h5&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;speed.cloudflare.com&#x2F;&quot;&gt;Cloudflare speed test&lt;&#x2F;a&gt;
&lt;img src=&quot;static&#x2F;speed_test_studio_2gbe.png&quot; alt=&quot;Latency report for my home 2GbE connection to Firewalla&quot; title=&quot;Cloudflare Speed Test&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h5 id=&quot;10gbe-5gbe-firewalla-comcast-1-gbe&quot;&gt;10GbE -&amp;gt; 5GbE(Firewalla) -&amp;gt; Comcast 1+GbE&lt;&#x2F;h5&gt;
&lt;p&gt;&lt;img src=&quot;static&#x2F;speed_test_studio_5gbe.png&quot; alt=&quot;Latency report for my home using 5GbE connection to Firewalla&quot; title=&quot;Cloudflare Speed Test&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;www.waveform.com&#x2F;tools&#x2F;bufferbloat&quot;&gt;waveform bufferbloat test&lt;&#x2F;a&gt;
&lt;img src=&quot;static&#x2F;waveform_test_studio_5gbe.png&quot; alt=&quot;BufferBloat grading report for my home using 5GbE connection to Firewalla&quot; title=&quot;Waveform BufferBloat Test&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;flent.org&quot;&gt;Flent&lt;&#x2F;a&gt; is a suite of tests developed by the Bufferbloat project to diagnose bufferbloat and other connectivity problems. Because Flent has been tested to 40GigE, you can get a good feel for how the connection behaves while you tune your settings. In particular, Flent’s &lt;a href=&quot;https:&#x2F;&#x2F;www.bufferbloat.net&#x2F;projects&#x2F;bloat&#x2F;wiki&#x2F;RRUL_Chart_Explanation&#x2F;&quot;&gt;RRUL test&lt;&#x2F;a&gt; shows download and upload speeds and latency in one set of charts.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;static&#x2F;flent_rrul_test_5gbe.png&quot; alt=&quot;Flent RRUL graph of my home using 5GbE connection to Firewalla&quot; title=&quot;Flent RRUL Test&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h4 id=&quot;macbook&quot;&gt;Macbook&lt;&#x2F;h4&gt;
&lt;p&gt;With the MacBook I’m 100% on Wi-Fi, so I know there’s going to be some extra overhead compared to Ethernet.&lt;&#x2F;p&gt;
&lt;h5 id=&quot;macbook-pro-wifi-5g-1gbe-nic-2gbe-firewalla-comcast-1-gbe&quot;&gt;MacBook Pro Wi-Fi (5GHz) -&amp;gt; 1GbE (NIC) -&amp;gt; 2GbE (Firewalla) -&amp;gt; Comcast 1+GbE&lt;&#x2F;h5&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;www.waveform.com&#x2F;tools&#x2F;bufferbloat&quot;&gt;waveform bufferbloat test&lt;&#x2F;a&gt;
&lt;img src=&quot;static&#x2F;waveform_test_macbook_wifi.png&quot; alt=&quot;BufferBloat grading report for my home using Wi-Fi&quot; title=&quot;Waveform BufferBloat Test&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;speed.cloudflare.com&#x2F;&quot;&gt;Cloudflare speed test&lt;&#x2F;a&gt;
&lt;img src=&quot;static&#x2F;speed_test_macbook_wifi_1.png&quot; alt=&quot;Latency report for my home using Wi-Fi&quot; title=&quot;Cloudflare Speed Test&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;h4 id=&quot;the-only-consistency-on-wifi-is-how-inconsistent-it-is&quot;&gt;The only consistency on WiFi is how inconsistent it is&lt;&#x2F;h4&gt;
&lt;p&gt;I ended up running back-to-back speed tests sitting on my couch while watching YouTube on my TV (hardwired Ethernet connection) and running the test from my MacBook on the Wi-Fi. Again, I’m only sitting about 6’ away from the router without any obstructions. This really shows the issue under a normal network load: my Wi-Fi experience only worsens with moderate usage.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;static&#x2F;speed_test_macbook_wifi_2.png&quot; alt=&quot;Latency report for my home using Wi-Fi&quot; title=&quot;Cloudflare Speed Test&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;static&#x2F;speed_test_macbook_wifi_3.png&quot; alt=&quot;Latency report for my home using Wi-Fi&quot; title=&quot;Cloudflare Speed Test&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;p&gt;I continue to get warned about how bad my Wi-Fi is; even Google Meet flags how bad my situation is.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;img src=&quot;static&#x2F;google_meet_terrible.png&quot; alt=&quot;Google Meet latency report for my home using Wi-Fi&quot; title=&quot;Google Meet Showing My Connection Issues&quot; width=&quot;100%&quot; &#x2F;&gt;&lt;&#x2F;p&gt;
&lt;!-- ![[speed_test_macbook_wifi_3.png]] --&gt;
&lt;h2 id=&quot;next-steps&quot;&gt;Next Steps&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;smart-queue-with-firewalla&quot;&gt;Smart Queue with Firewalla&lt;&#x2F;h3&gt;
&lt;p&gt;I probably need to spend some time and energy setting up the Smart Queue feature on the Firewalla. They recently released support for CAKE.&lt;&#x2F;p&gt;
&lt;p&gt;Dave Täht, co-founder of the Bufferbloat Project, commented: “The FQ-Codel (RFC8290) and the newer CAKE packet scheduling&#x2F;AQM algorithms are nearly universal on servers and clients. They give the “little guy” – the small packets, the first packets in a new connection to anywhere – a boost until the flow achieves parity with other flows from other sources. DNS, gaming traffic, VoIP, videoconferencing, any new flow, to anywhere, get a small boost. That’s it. After that, &lt;em&gt;all&lt;&#x2F;em&gt; network traffic gets treated equally. Big flows – from Netflix, Google, Comcast, your Mom, or to Timbuktu – all achieve &lt;em&gt;parity&lt;&#x2F;em&gt;, with minimal delay and buffering, at the worldwide variety of real round trip times.“ &lt;a href=&quot;https:&#x2F;&#x2F;www.prnewswire.com&#x2F;news-releases&#x2F;libreqoe-releases-version-1-3-of-their-isp-quality-of-experience-framework-301697634.html&quot;&gt;1&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Firewalla has Smart Queue, and with their latest release they have the option to use CAKE:&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;CAKE (Common Applications Kept Enhanced) is a shaping-capable
&lt;&#x2F;span&gt;&lt;span&gt;       queue discipline which uses both AQM and FQ.
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;help.firewalla.com&#x2F;hc&#x2F;en-us&#x2F;articles&#x2F;360056976594-Firewalla-Feature-Smart-Queue#h_01H2TTZZ5B16YWQPWHDX8S5H7V&quot;&gt;Firewalla Feature: Smart Queue&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h3 id=&quot;network-segmentation&quot;&gt;Network Segmentation&lt;&#x2F;h3&gt;
&lt;p&gt;Separate devices onto their own VLAN segments, and block IoT traffic from reaching the internet.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;anything-else-i-can-think-of&quot;&gt;Anything Else I can think of&lt;&#x2F;h3&gt;
&lt;p&gt;Next steps could be to roll out any of the recommendations the Bufferbloat article has on Wi-Fi hardware. I guess I’ll pick something that runs &lt;a href=&quot;https:&#x2F;&#x2F;openwrt.org&#x2F;&quot;&gt;OpenWrt&lt;&#x2F;a&gt;. It really makes me wonder whether Ubiquiti products also encounter these issues. I’ll also need to read up more on &lt;a href=&quot;https:&#x2F;&#x2F;wiki.stoplagging.com&#x2F;&quot;&gt;wiki.stoplagging.com&lt;&#x2F;a&gt;, as things like SQM only help up to roughly a 350 Mb&#x2F;s connection.&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>Tips &amp; Tricks of the trade: video compression with ffmpeg</title>
          <pubDate>Thu, 26 Oct 2023 02:31:11 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-ffmpeg/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-ffmpeg/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-ffmpeg/">&lt;h1 id=&quot;tips-tricks-of-the-trade-video-compression-with-ffmpeg&quot;&gt;Tips &amp;amp; Tricks of the Trade: video compression with ffmpeg&lt;&#x2F;h1&gt;
&lt;p&gt;When you need to share a recording that isn’t ginormous, convert the video to a GIF with &lt;code&gt;ffmpeg&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ffmpeg &lt;&#x2F;span&gt;&lt;span&gt;\
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -i&lt;&#x2F;span&gt;&lt;span&gt; IMG_1288.MOV \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -ss&lt;&#x2F;span&gt;&lt;span&gt; 00:00:00.000 \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -pix_fmt&lt;&#x2F;span&gt;&lt;span&gt; rgb24 \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -r&lt;&#x2F;span&gt;&lt;span&gt; 10 \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -s&lt;&#x2F;span&gt;&lt;span&gt; 320x240 \
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;  -t&lt;&#x2F;span&gt;&lt;span&gt; 00:00:10.000 \
&lt;&#x2F;span&gt;&lt;span&gt;  IMG_1288.gif
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
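&lt;p&gt;If the GIF comes out with ugly banding, the two-pass palette trick usually helps: generate an optimized palette from the clip first, then map the GIF through it. Same input and trim settings as above; the filter chain is standard &lt;code&gt;palettegen&lt;&#x2F;code&gt;&#x2F;&lt;code&gt;paletteuse&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;

```sh
# pass 1: build a 256-color palette tuned to this clip
ffmpeg -ss 00:00:00.000 -t 00:00:10.000 -i IMG_1288.MOV \
  -vf "fps=10,scale=320:-1:flags=lanczos,palettegen" palette.png
# pass 2: render the GIF through that palette
ffmpeg -ss 00:00:00.000 -t 00:00:10.000 -i IMG_1288.MOV -i palette.png \
  -filter_complex "fps=10,scale=320:-1:flags=lanczos[x];[x][1:v]paletteuse" IMG_1288.gif
```

&lt;p&gt;The result is usually noticeably cleaner than the single-pass &lt;code&gt;-pix_fmt rgb24&lt;&#x2F;code&gt; version, at a similar file size.&lt;&#x2F;p&gt;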
&lt;p&gt;#tips
#video_compression
#work&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>Building analytic NLP workflows with CI&#x2F;CD, containers, &amp; python on cloud.gov</title>
          <pubDate>Mon, 09 Oct 2023 18:39:10 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/cloudfoundry-ephemeral-containers/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/cloudfoundry-ephemeral-containers/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/cloudfoundry-ephemeral-containers/">&lt;h1 id=&quot;building-analytic-nlp-workflows-with-ci-cd-containers-python-on-cloud-gov&quot;&gt;Building analytic NLP workflows with CI&#x2F;CD, containers, &amp;amp; python on cloud.gov&lt;&#x2F;h1&gt;
&lt;p&gt;I began this journey on familiar footing: providing reliable methods that enable data scientists to deliver insights on institutional data. The core fundamentals are not much different from delivering any other software product. There is a defined sandbox of system constraints that shapes the design, building, and operation of the overall system. We start with the fundamentals and build out in layers. To begin with, we make sure the basics are followed, like getting code and decisions documented in version control, and making sure everyone is operating with the same primitives to iterate and deliver value. From there, the prompt to deliver against is relatively simple: explore how natural language processing can be used to evaluate data and identify duplicate values. Our data scientist whipped up a solution in a day or two, so we’re done, right? Not really. Underneath the initial experiment is the question of how to build a clear process or framework for the team to continuously deliver and compound our insight capabilities on the existing product.&lt;&#x2F;p&gt;
&lt;p&gt;Most of this work is net new, so there is some flexibility in how to pursue a solution. When I’m working on greenfield technology products, I generally choose tooling that can be replicated locally and in the cloud. The current paradigm that I’m fond of is to pick a comfortable software language, bundle it up into a container image, apply some sort of version tag to that image, orchestrate getting that image to some sort of compute environment, and you’re good to go. From there, developers can build the container image locally and know what version is running in their various environments. Below, I explore the thought process and constraints in greater detail.&lt;&#x2F;p&gt;
&lt;p&gt;Check out the &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;kcirtapfromspace&#x2F;cloudfoundry_circleci&quot;&gt;demo source code&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h2 id=&quot;the-constraints&quot;&gt;The Constraints&lt;&#x2F;h2&gt;
&lt;p&gt;Just to reiterate what there is to work through: build a greenfield mechanism that uses NLP to process entries with Python scripts on cloud.gov infrastructure, where the CI&#x2F;CD tooling is CircleCI.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;infrastructure&quot;&gt;Infrastructure&lt;&#x2F;h3&gt;
&lt;h4 id=&quot;cloud-gov&quot;&gt;&lt;a href=&quot;https:&#x2F;&#x2F;cloud.gov&#x2F;&quot;&gt;Cloud.gov&lt;&#x2F;a&gt;&lt;&#x2F;h4&gt;
&lt;p&gt;Let’s look into cloud.gov: why would teams build on this platform vs. AWS GovCloud? It has two major benefits from the start: first, they handle all the infrastructure management; second, they’re FedRAMP authorized. Teams inherently need fewer resources to build, deliver, and manage the application lifecycle.&lt;&#x2F;p&gt;
&lt;h5 id=&quot;compliant-from-the-start&quot;&gt;&lt;a href=&quot;https:&#x2F;&#x2F;cloud.gov&#x2F;#compliant-from-the-start&quot;&gt;Compliant from the start&lt;&#x2F;a&gt;&lt;&#x2F;h5&gt;
&lt;p&gt;Cloud.gov offers a fast way for federal agencies to host and update websites, APIs, and other applications. Employees and contractors can focus on developing mission-critical applications, leaving server infrastructure management to us.&lt;&#x2F;p&gt;
&lt;h5 id=&quot;fedramp-authorized&quot;&gt;&lt;a href=&quot;https:&#x2F;&#x2F;cloud.gov&#x2F;#fedramp-authorized&quot;&gt;FedRAMP authorized&lt;&#x2F;a&gt;&lt;&#x2F;h5&gt;
&lt;p&gt;Cloud.gov has a FedRAMP Joint Authorization Board (JAB) authorization, which means it complies with federal security requirements. When you build a system on cloud.gov, you leverage this compliance and reduce the amount of work you need to do.&lt;&#x2F;p&gt;
&lt;p&gt;Cloud.gov operates as a Platform as a Service (PaaS) abstraction on top of AWS; it enables developers to deliver applications efficiently without too much finagling with infrastructure. The good news is you don’t have to worry about most of the things SREs or Ops people build careers on worrying about, such as VPCs, subnets, ingress, egress, route tables, firewalls, DNS, EC2, ECS, EKS, or access controls. Cloud.gov manages this by building their platform on Cloud Foundry.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;cloud-foundry&quot;&gt;&lt;a href=&quot;https:&#x2F;&#x2F;www.cloudfoundry.org&#x2F;&quot;&gt;Cloud Foundry&lt;&#x2F;a&gt;&lt;&#x2F;h4&gt;
&lt;p&gt;Cloud Foundry has a CLI tool that enables developers to provision resources within an organization. There is also a Terraform provider that extends this functionality to declarative Infrastructure as Code (IaC). A developer can easily add these to their CI&#x2F;CD pipeline to automate the build, deploy, and maintenance processes of an application lifecycle hosted in cloud.gov. With cloud.gov operating as a proxy between the developer &amp;amp; AWS, there are some quirks to navigate.&lt;&#x2F;p&gt;
&lt;h5 id=&quot;limitations&quot;&gt;Limitations&lt;&#x2F;h5&gt;
&lt;p&gt;One limitation of operating within a PaaS is that you kind of get what you get. For example, I built out a demo app using the &lt;a href=&quot;https:&#x2F;&#x2F;developers.sap.com&#x2F;tutorials&#x2F;hcp-create-trial-account.html&quot;&gt;SAP BTP trial platform&lt;&#x2F;a&gt;, which also happens to operate on fundamentals provided by Cloud Foundry. SAP provides a free 90-day trial account, which was a great way to test some ideas. (By the way, incredible! No limits except for quotas, just build and use.)&lt;&#x2F;p&gt;
&lt;p&gt;There is usually a vendor-provided library&#x2F;marketplace of available &lt;a href=&quot;https:&#x2F;&#x2F;cloud.gov&#x2F;docs&#x2F;services&#x2F;intro&#x2F;&quot;&gt;service brokers&lt;&#x2F;a&gt; for use. For the SAP trial account, there were 46 available, with the vast majority tailored as integrations into the SAP platform. For my needs, I used a more general-purpose broker for Postgres. With a couple of button clicks or a string of commands to the CLI, a fully managed Postgres database is deployed. An application can “bind” to a service to enable direct connectivity.&lt;&#x2F;p&gt;
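&lt;p&gt;For context on what “bind” actually does: Cloud Foundry hands bound service credentials to the app through the &lt;code&gt;VCAP_SERVICES&lt;&#x2F;code&gt; environment variable as JSON. A minimal sketch of pulling Postgres credentials out of it; the &lt;code&gt;aws-rds&lt;&#x2F;code&gt; service label and credential keys shown are assumptions, so check &lt;code&gt;cf services&lt;&#x2F;code&gt; for the names in your org:&lt;&#x2F;p&gt;

```python
import json
import os

def bound_credentials(service_label="aws-rds"):
    """Return the credentials block of the first bound service under service_label.

    The "aws-rds" label is an assumption here; brokers name services differently.
    """
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    instances = vcap.get(service_label, [])
    return instances[0]["credentials"] if instances else None

# Mocked environment mirroring the JSON shape Cloud Foundry provides:
os.environ["VCAP_SERVICES"] = json.dumps(
    {"aws-rds": [{"credentials": {"host": "db.internal", "port": 5432, "db_name": "demo"}}]}
)
print(bound_credentials()["host"])  # db.internal
```

&lt;p&gt;An app reads this once at startup to build its database connection string, instead of hardcoding credentials.&lt;&#x2F;p&gt;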
&lt;p&gt;With a deployed application, logs are proxied through the service platform. This can be plagued with vague errors or terminology, but for the most part a developer can fetch the necessary logs for the deployed application; for service brokers, not so much. Building any application with complex dependencies on service brokers could put you in a tricky spot when trying to debug problems not directly caused by the deployed application.&lt;&#x2F;p&gt;
&lt;p&gt;Cloud Foundry runs on the linux&#x2F;amd64 x86 architecture. Building across platforms can be a cause of headaches: I run an M1 Mac that uses darwin&#x2F;arm64, which has produced numerous compatibility conflicts when building things. For the past year or so I’ve run into hiccups from architecture mismatches on some dependency or package, causing unnecessary detours. The main thing that gets jammed up is that Cloud Foundry (the underlying open source project behind cloud.gov) uses &lt;a href=&quot;https:&#x2F;&#x2F;cloud.gov&#x2F;docs&#x2F;getting-started&#x2F;concepts&#x2F;#buildpacks&quot;&gt;buildpacks&lt;&#x2F;a&gt; in order to deploy applications.&lt;&#x2F;p&gt;
&lt;p&gt;I actually find buildpacks and builders to be fascinating; I believe they will be the way forward, eliminating any need to meticulously craft multi-stage container images. Which is exactly how I first approached tackling this demo.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;open-container-initiative-oci&quot;&gt;Open Container Initiative (OCI)&lt;&#x2F;h4&gt;
&lt;p&gt;The Open Container Initiative (OCI) is a standard, established under the Linux Foundation, that ensures all container images follow the same protocol so they can be used everywhere. I’m a big fan of using containers to encapsulate code and run it in some sort of compute environment (pick your preference). After getting the NLP Python script from the resident data scientist, my first step was to bundle the code into a container. This is usually my first step when jumping into a greenfield initiative like this.&lt;&#x2F;p&gt;
&lt;p&gt;When it comes to building a container image, the most familiar pattern is to write a Dockerfile. There’s a ton of documentation out there, but in general: use multi-stage builds to shrink the final product that will run, and if you can eliminate any means for shell access, do that too. Specific to building a Python app, check out &lt;a href=&quot;https:&#x2F;&#x2F;pythonspeed.com&#x2F;docker&#x2F;&quot;&gt;pythonspeed.com&lt;&#x2F;a&gt;; the author knows a thing or two about containers and Python and delivers solid guidance for building a custom Python container image. I’ve somewhat arrived at my own personal preference for building images, largely influenced by his guidance.&lt;br &#x2F;&gt;
It boils down to using a base that updates and installs core dependencies for building the app, building the app, and then copying all the goodies to a final image that is locked down from shell&#x2F;root access.&lt;&#x2F;p&gt;
&lt;p&gt;It generally looks something like this:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;dockerfile&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-dockerfile &quot;&gt;&lt;code class=&quot;language-dockerfile&quot; data-lang=&quot;dockerfile&quot;&gt;# build stage: update and install core build dependencies
FROM python:3.11-slim AS build
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y --no-install-recommends build-essential
RUN pip install --upgrade pip wheel

# install stage: use a venv so you know what you actually need
FROM build AS package_installer
RUN python -m venv &#x2F;opt&#x2F;venv
COPY requirements.txt .
RUN &#x2F;opt&#x2F;venv&#x2F;bin&#x2F;pip install -r requirements.txt &amp;amp;&amp;amp; &#x2F;opt&#x2F;venv&#x2F;bin&#x2F;pip cache purge

# final stage: copy the goodies into a distroless image with no shell&#x2F;root access
FROM gcr.io&#x2F;distroless&#x2F;python3-debian12 AS final
ENV PYTHONPATH=&#x2F;opt&#x2F;venv&#x2F;lib&#x2F;python3.11&#x2F;site-packages
COPY --from=package_installer &#x2F;opt&#x2F;venv &#x2F;opt&#x2F;venv
COPY app.py .
ENTRYPOINT [&quot;python&quot;, &quot;app.py&quot;]
&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;wagoodman&#x2F;dive&quot;&gt;Inspect your container with Dive&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h4 id=&quot;buildpacks&quot;&gt;Buildpacks&lt;&#x2F;h4&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;dev.to&#x2F;thenjdevopsguy&#x2F;cloud-native-buildpacks-vs-dockerfiles-for-building-container-images-55m5&quot;&gt;This article provides a nice distinction between a Dockerfile and Buildpacks&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;TL;DR: buildpacks provide an opinionated abstraction to build a container from source code.&lt;&#x2F;p&gt;
&lt;p&gt;Instead of needing to meticulously handcraft the masterpiece that is your thousand-line Dockerfile, you can pick a container builder like &lt;a href=&quot;https:&#x2F;&#x2F;paketo.io&#x2F;&quot;&gt;Paketo&lt;&#x2F;a&gt; and leverage their Python buildpack to produce an image equivalent to what you were expecting with distroless. Paketo advertises: “Just bring your app and Paketo Buildpacks will detect what language your app is using, gather the required dependencies, and build it into an image.” There are always edge cases where something falls through the cracks (for me, it was trying to build PyTorch for CPU on the amd64 architecture).&lt;&#x2F;p&gt;
&lt;p&gt;Cloud.gov also maintains and provides buildpacks. I can’t really speak to the efficiency of the provided buildpacks, but any managed service or tooling we can leverage reduces the complexity and overhead needed for long-term support and maintenance.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;ci-cd&quot;&gt;CI&#x2F;CD&lt;&#x2F;h4&gt;
&lt;h5 id=&quot;circleci&quot;&gt;CircleCI&lt;&#x2F;h5&gt;
&lt;ul&gt;
&lt;li&gt;Connecting repo&lt;&#x2F;li&gt;
&lt;li&gt;env secrets&lt;&#x2F;li&gt;
&lt;li&gt;git bot account&lt;&#x2F;li&gt;
&lt;li&gt;sbom&lt;&#x2F;li&gt;
&lt;li&gt;artifacts&lt;&#x2F;li&gt;
&lt;li&gt;publish to cloudfoundry&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;first-contact&quot;&gt;First contact&lt;&#x2F;h2&gt;
&lt;p&gt;My initial thought process was: crank out a container image, use the Cloud Foundry run-task command (which supports docker images), and Bob’s your uncle, right? Clap our hands and we’re done. Of course, there would also need to be static code analysis, container image scanning, and potentially container runtime analysis to help alleviate the concerns presented by any auditors. This could be paired with a Software Bill of Materials (SBOM) generated on demand or with releases. Containers and code can easily be version controlled by applying tags in your favorite container registry. This didn’t necessarily need to be a robust service (the current process is handled entirely on a laptop), so I figured that the CI&#x2F;CD or the native CRUD app could run the job on demand or on a cron schedule. But of course there’s that saying: no battle plan survives first contact…&lt;&#x2F;p&gt;
&lt;p&gt;After building the NLP container image, I mocked up an app to generate and populate fake data into a Postgres database. This randomly generated a string, inserted it into a table with a unique ID, and associated a unique user with that entry. Doing this allowed me to see the NLP in action, identifying any duplicate records that were created. Sweet! This design also more or less worked on the first or second try. Next, I tried to tie that in with CircleCI and deploy to Cloud Foundry. I immediately encountered disk quota issues. The container image being built by CircleCI was ~10G; at a loss as to why, I dove in to find the discrepancy. Of course CPU architecture is to blame: arm64 vs. amd64. My local container image wound up somewhere between ~2G &amp;amp; 700Mb depending on how aggressive I got on cleaning up resources. Inspecting the image layers with &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;wagoodman&#x2F;dive&quot;&gt;dive&lt;&#x2F;a&gt;, I was able to point a finger at the biggest culprit for the extra resources: the NVIDIA CUDA libraries for GPU processing that are installed with the amd64 build of PyTorch. I now needed to update the custom image to source the CPU version of PyTorch. Not an inherently difficult obstacle to overcome: patch my Dockerfile and carry on. But for some reason I continued to encounter various issues when deploying the docker image, and determined I needed to understand in more depth what cloud.gov is doing with that docker image. It turns out it rebuilds the image with buildpacks.&lt;&#x2F;p&gt;
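&lt;p&gt;For reference, sourcing the CPU-only build of PyTorch is a matter of pointing pip at PyTorch’s CPU wheel index (this is the flag their own install selector generates):&lt;&#x2F;p&gt;

```sh
# CPU-only wheels skip the NVIDIA CUDA libraries entirely
pip install torch --index-url https://download.pytorch.org/whl/cpu
```

&lt;p&gt;On linux&#x2F;amd64 this drops the image by gigabytes, since the default PyPI wheel bundles CUDA.&lt;&#x2F;p&gt;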
&lt;p&gt;Enter buildpacks, and what is now a proverbial mix of frustration and fascination with them.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;&#x2F;strong&gt; when you deploy a docker image (OCI compliant) to cloud.gov, it does not in fact deploy a docker image.&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;h3 id=&quot;runtime-differences&quot;&gt;Runtime differences&lt;a href=&quot;https:&#x2F;&#x2F;cloud.gov&#x2F;docs&#x2F;deployment&#x2F;docker&#x2F;#runtime-differences&quot;&gt;&lt;&#x2F;a&gt;&lt;&#x2F;h3&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Pushing an application using a Docker image creates the same type of container in the same runtime as using a buildpack does. When you supply a Docker image for your application, Cloud Foundry:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;fetches the Docker image&lt;&#x2F;li&gt;
&lt;li&gt;uses the image layers to construct a base filesystem&lt;&#x2F;li&gt;
&lt;li&gt;uses the image metadata to determine the command to run, environment vars, user id, and port to expose (if any)&lt;&#x2F;li&gt;
&lt;li&gt;creates an app specification based on the steps above&lt;&#x2F;li&gt;
&lt;li&gt;passes the app specification on to diego (the multi-host container management system) to be run as a linux container.&lt;&#x2F;li&gt;
&lt;&#x2F;ol&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;No Docker components are involved in this process - your applications are run under the &lt;code&gt;garden-runc&lt;&#x2F;code&gt; runtime (versus &lt;code&gt;containerd&lt;&#x2F;code&gt; in Docker). Both &lt;code&gt;garden-runc&lt;&#x2F;code&gt; and &lt;code&gt;containerd&lt;&#x2F;code&gt; are layers built on top of the Open Container Initiative’s &lt;code&gt;runc&lt;&#x2F;code&gt; package. They have significant overlap in the types of problems they solve and in many of the ways they try to solve them. For example, both &lt;code&gt;garden-runc&lt;&#x2F;code&gt; and &lt;code&gt;containerd&lt;&#x2F;code&gt;:&lt;&#x2F;p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;use cgroups to limit resource usage&lt;&#x2F;li&gt;
&lt;li&gt;use process namespaces to isolate processes&lt;&#x2F;li&gt;
&lt;li&gt;combine image layers into a single root filesystem&lt;&#x2F;li&gt;
&lt;li&gt;use user namespaces to prevent users with escalated privileges in containers from gaining escalated privileges on hosts (this is an available option on &lt;code&gt;containerd&lt;&#x2F;code&gt; and is a default on &lt;code&gt;garden-runc&lt;&#x2F;code&gt;)&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;blockquote&gt;
&lt;p&gt;Additionally, since containers are running in Cloud Foundry, most or all of the other components of the Docker ecosystem are replaced with Cloud Foundry components, such as service discovery, process monitoring, virtual networking, routing, volumes, etc. This means most Docker-specific guidance, checklists, etc., will not be directly applicable for applications within Cloud Foundry, regardless of whether they’re pushed as Docker images or buildpack applications.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;using-the-buldkit&quot;&gt;Using the buildpack&lt;&#x2F;h3&gt;
&lt;p&gt;The last statement in the documentation is what really caught me off guard. This creates the common “works on my machine” scenario that many teams get into, with drift between devices. I felt this would break the idempotency of delivering the app bundled in a container image, and there was additional overhead of monitoring and validating container images that is not present for the existing app using buildpacks. I began thinking it might be easier to deliver the web app with the cloud.gov buildpack, as that’s what it’s designed to provide. Not having worked with buildpacks, I looked into what was needed; turns out, not much. You really just need the code, a requirements.txt file, and an entrypoint to the application (Procfile). I manually deployed this version of the application with the cloud.gov CLI and ran into the same issues as before.&lt;&#x2F;p&gt;
&lt;p&gt;This is where I went down a couple of rabbit holes before settling on a solution. The first was evaluating whether I could correctly install PyTorch with poetry (that’s a no) vs. the more common package manager pip. It turns out the way PyTorch manages their packages is just at odds with both: poetry struggles with the nested explicit versions that exclude the GPU packages, and pip does similarly, with mismatched dependencies between what gets installed and what needs to be used.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;pytorch-vs-spacy&quot;&gt;PyTorch vs spaCy&lt;&#x2F;h3&gt;
&lt;p&gt;This led me to look for alternatives to the PyTorch sentence-transformer. I came across spaCy, a lightweight NLP library whose models can replace the functionality of what was originally being used. Once that was swapped in, I was able to deploy successfully, with the container image well under the 2 GB I had been building.&lt;&#x2F;p&gt;
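&lt;p&gt;To make the duplicate-flagging idea concrete without pulling in spaCy here, this is a stand-in sketch using the stdlib’s &lt;code&gt;difflib&lt;&#x2F;code&gt; for pairwise similarity. The real pipeline uses spaCy’s vector similarity, which compares meaning rather than characters, and the 0.9 threshold is an arbitrary assumption:&lt;&#x2F;p&gt;

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(entries, threshold=0.9):
    """Flag pairs of entries whose similarity ratio meets the threshold.

    Stand-in for the spaCy-based similarity in the real pipeline; difflib
    compares characters, not semantics, so treat this as illustrative only.
    """
    dupes = []
    for (i, a), (j, b) in combinations(enumerate(entries), 2):
        ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if ratio >= threshold:
            dupes.append((i, j, round(ratio, 2)))
    return dupes

entries = [
    "Quarterly budget report for region 5",
    "quarterly budget report for region 5",
    "Unrelated incident summary",
]
print(near_duplicates(entries))  # [(0, 1, 1.0)]
```

&lt;p&gt;Swapping the similarity function is the only change between this sketch and the spaCy version; the pairwise scan and thresholding stay the same.&lt;&#x2F;p&gt;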
&lt;h3 id=&quot;building-a-flask-demo&quot;&gt;Building a flask demo&lt;&#x2F;h3&gt;
&lt;p&gt;I opted to mock out a demo of this analysis so the team could evaluate usage. I began with a lightweight Flask app that was easy to get started with. I connected to the database and generated data. From the mock data I was able to query a column and output all the duplicates. Slap on a super simple UI with some templates, buttons, and frontend JavaScript, and of course set the background to NASA’s picture of the day. Run it locally, and now there is a working demo of the data scientist’s NLP model assessing duplicates. How can this be useful beyond a demo? Make it an API.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;flask-app&quot;&gt;flask app&lt;&#x2F;h4&gt;
&lt;p&gt;lightweight
simple to work locally&lt;&#x2F;p&gt;
&lt;h4 id=&quot;db-connection&quot;&gt;db connection&lt;&#x2F;h4&gt;
&lt;p&gt;create a reusable component to manage db connection&lt;&#x2F;p&gt;
&lt;h4 id=&quot;sample-data&quot;&gt;sample data&lt;&#x2F;h4&gt;
&lt;p&gt;control the outcomes&lt;&#x2F;p&gt;
&lt;h4 id=&quot;slap-on-simple-ui&quot;&gt;slap on simple UI&lt;&#x2F;h4&gt;
&lt;p&gt;add some templates, buttons, and some frontend javascript
nasa picture of the day&lt;&#x2F;p&gt;
&lt;h4 id=&quot;working-demo&quot;&gt;working demo&lt;&#x2F;h4&gt;
&lt;p&gt;gunicorn delivers&lt;&#x2F;p&gt;
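&lt;p&gt;The “query a column and output all the duplicates” step is, at its core, a GROUP BY&#x2F;HAVING query. A self-contained sketch, with sqlite3 standing in for the Postgres instance and made-up table and column names:&lt;&#x2F;p&gt;

```python
import sqlite3

# In-memory database standing in for the cloud.gov Postgres service.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, user TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO entries (user, body) VALUES (?, ?)",
    [("alice", "lorem ipsum"), ("bob", "lorem ipsum"), ("carol", "something else")],
)

# Exact duplicates by body; the NLP layer catches the fuzzy ones.
rows = conn.execute(
    "SELECT body, COUNT(*) FROM entries GROUP BY body HAVING COUNT(*) > 1"
).fetchall()
print(rows)  # [('lorem ipsum', 2)]
```

&lt;p&gt;The demo endpoint just serializes rows like these into the template (or, later, into JSON).&lt;&#x2F;p&gt;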
&lt;h3 id=&quot;refactoring-flask-demo-to-fastapi&quot;&gt;Refactoring flask demo to fastAPI&lt;&#x2F;h3&gt;
&lt;p&gt;Drop the Flask app and insert FastAPI. This required changing all the responses to JSON, but you get the added benefit of a self-documenting API page (same as what is seen in Swagger&#x2F;OpenAPI).&lt;&#x2F;p&gt;
&lt;h4 id=&quot;add-auth-mechanism&quot;&gt;add auth mechanism&lt;&#x2F;h4&gt;
&lt;h5 id=&quot;the-hell-that-is-a-frontend-auth-token-broker&quot;&gt;the hell that is a frontend auth token broker&lt;&#x2F;h5&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;oauth2_scheme = OAuth2PasswordBearer(tokenUrl=&quot;token&quot;)&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;HTTPBasic()&lt;&#x2F;code&gt;&lt;&#x2F;li&gt;
&lt;li&gt;encryption&lt;&#x2F;li&gt;
&lt;li&gt;&lt;code&gt;csrf_token&lt;&#x2F;code&gt;: frontend login vs. API bearer token&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h4 id=&quot;tagged-based-routes-or-at-least-that-how-i-think-it-works&quot;&gt;Tag-based routes (or at least that’s how I think it works)&lt;&#x2F;h4&gt;
&lt;p&gt;Separation of auth routes from common routes; segmentation just sounded right in my brain.&lt;&#x2F;p&gt;
&lt;h4 id=&quot;structure-things-as-modules&quot;&gt;structure things as modules&lt;&#x2F;h4&gt;
&lt;h4 id=&quot;should-have-started-with-tests-but-now-we-ve-got-em&quot;&gt;should have started with tests, but now we’ve got em&lt;&#x2F;h4&gt;
&lt;h4 id=&quot;middleware&quot;&gt;middleware&lt;&#x2F;h4&gt;
&lt;p&gt;httpsredirect
csrf_token&lt;&#x2F;p&gt;
&lt;h5 id=&quot;database-things&quot;&gt;Database things&lt;&#x2F;h5&gt;
&lt;p&gt;Embracing first principles of agile: work doesn’t need to be perfect, just get it out. Deliver the MVP first before having conversations about performance, durability, and whatnot. Adding any greenfield service to an existing product needs to be eased into. There’s no need to rush into optimizing with a caching queue until after first contact with the users.&lt;&#x2F;p&gt;
&lt;h5 id=&quot;going-to-prod&quot;&gt;Going to Prod&lt;&#x2F;h5&gt;
&lt;p&gt;Taking an application to production requires a lot of careful planning and testing. The application needs to be stable, secure, and able to handle the expected load. Here are top level considerations you should at least run through and think about:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Code Quality&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Performance&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Security&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Scalability&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Monitoring &amp;amp; Logging&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;strong&gt;Disaster Recovery &amp;amp; Backup Strategy&lt;&#x2F;strong&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Remember that deploying to production is not the end of the development process but rather a new phase where you’ll need to monitor the system closely, gather user feedback, fix bugs, and continuously improve based on user needs and business goals.&lt;&#x2F;p&gt;
&lt;p&gt;References:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;cloud.gov&#x2F;docs&#x2F;getting-started&#x2F;concepts&#x2F;#buildpacks&quot;&gt;cloud.gov buildpacks&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;cloudfoundry-community&#x2F;cf-docs-contrib&#x2F;wiki&#x2F;Buildpacks&quot;&gt;Cloud Foundry community buildpacks&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;wagoodman&#x2F;dive&quot;&gt;dive&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;paketo.io&#x2F;&quot;&gt;Paketo&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;#blog #nlp #cloud_gov&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>Tips &amp; Tricks: SSH Passthrough</title>
          <pubDate>Sat, 01 Jul 2023 00:00:00 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-ssh-passthrough/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-ssh-passthrough/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-ssh-passthrough/">&lt;h1 id=&quot;ssh-passthrough&quot;&gt;SSH Passthrough&lt;&#x2F;h1&gt;
&lt;p&gt;Quick reference: need to connect to a private host behind a jump host? Here are a couple of quick approaches.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;1-use-the-j-proxyjump-option&quot;&gt;1. Use the &lt;code&gt;-J&lt;&#x2F;code&gt; (ProxyJump) Option&lt;&#x2F;h2&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ssh -J&lt;&#x2F;span&gt;&lt;span&gt; user@jump-host user@internal-host
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;This passes your SSH session through jump-host to reach internal-host without extra config files.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;2-utilize-ssh-agent-forwarding&quot;&gt;2. Utilize SSH Agent Forwarding&lt;&#x2F;h2&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span&gt;ssh -A user@jump-host
&lt;&#x2F;span&gt;&lt;span&gt;# now from jump-host:
&lt;&#x2F;span&gt;&lt;span&gt;ssh user@internal-host
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;With -A, your SSH keys stay local, but are “forwarded” through jump-host to authenticate on internal-host.&lt;&#x2F;p&gt;
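&lt;p&gt;If you hop through the same jump host often, the &lt;code&gt;-J&lt;&#x2F;code&gt; behavior can be made permanent in &lt;code&gt;~&#x2F;.ssh&#x2F;config&lt;&#x2F;code&gt; via the &lt;code&gt;ProxyJump&lt;&#x2F;code&gt; directive (host and user names below are placeholders):&lt;&#x2F;p&gt;

```text
Host internal-host
    HostName internal-host
    User user
    ProxyJump user@jump-host
```

&lt;p&gt;After that, a plain &lt;code&gt;ssh internal-host&lt;&#x2F;code&gt; routes through the jump host automatically.&lt;&#x2F;p&gt;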
</description>
      </item>
      <item>
          <title>dbt for Data Teams</title>
          <pubDate>Thu, 01 Jun 2023 00:00:00 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/dbt-for-data-teams/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/dbt-for-data-teams/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/dbt-for-data-teams/">&lt;h1 id=&quot;executive-summary&quot;&gt;Executive Summary&lt;&#x2F;h1&gt;
&lt;p&gt;The fast-paced, data-centric business environment necessitates agile, reliable, and effective data analytics solutions. This document provides a comprehensive overview advocating for the implementation of Data Build Tool (dbt™) as a strategic asset in your data stack, particularly when used in conjunction with AWS services like Redshift, Glue, and S3, and integrated into CI&#x2F;CD pipelines through platforms like GitHub, Azure DevOps, or others.&lt;&#x2F;p&gt;
&lt;p&gt;dbt empowers data teams to create production-ready data pipelines with ease, while adhering to software engineering best practices. It serves as a catalyst in eliminating data silos, ensuring data integrity, fostering collaboration, and streamlining the deployment of data products.&lt;&#x2F;p&gt;
&lt;p&gt;dbt’s compatibility with AWS services amplifies its utility. When used with a data warehouse like Amazon Redshift, dbt optimizes data transformations and analytics. In combination with AWS Glue, it provides a streamlined, end-to-end ETL process. Its flexibility with AWS S3 offers additional storage options, making it a versatile tool in a data architect’s toolkit.&lt;&#x2F;p&gt;
&lt;p&gt;While the technology does present a learning curve and may have limitations for extremely large datasets, the benefits significantly outweigh the challenges. This document argues that adopting dbt is not just a technological decision but a business imperative for organizations striving to be data-driven in today’s competitive landscape. By integrating dbt into your data architecture, organizations can achieve faster, more reliable, and cost-effective data analytics, thereby accelerating their data capabilities.&lt;&#x2F;p&gt;
&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;&#x2F;h1&gt;
&lt;p&gt;While the company dbt Labs, Inc. does have multiple offerings through their SaaS product, for all purposes within this document we are referencing the open-source tooling dbt Core, a binary that can be installed on user or machine systems (via software package management tools like pip or Homebrew). dbt Core ships with a command-line interface (CLI) for running your dbt project. The dbt CLI is free to use and available as an open source project (&lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;dbt-labs&#x2F;dbt-core&quot;&gt;https:&#x2F;&#x2F;github.com&#x2F;dbt-labs&#x2F;dbt-core&lt;&#x2F;a&gt;).&lt;&#x2F;p&gt;
&lt;p&gt;Intended as a tool to help data engineers, data analysts, and data architects, this document advocates for the implementation of Data Build Tool (dbt™) alongside AWS services like Redshift and Glue. Implementing dbt can lead to the creation of agile, reliable, and effective data products. The modern business landscape requires organizations to move fast, make data-driven decisions, and be adaptive. Below is a framework that sets forth reasons why dbt should be a part of your data stack to ship trusted data products faster. &lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-is-dbttm&quot;&gt;What is dbt™?&lt;&#x2F;h2&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;www.getdbt.com&#x2F;product&#x2F;what-is-dbt&quot;&gt;https:&#x2F;&#x2F;www.getdbt.com&#x2F;product&#x2F;what-is-dbt&lt;&#x2F;a&gt; &lt;&#x2F;p&gt;
&lt;p&gt;dbt is a SQL-first transformation workflow that lets teams quickly and collaboratively deploy analytics code following software engineering best practices like modularity, portability, CI&#x2F;CD, and documentation, enabling anyone on the data team to safely contribute to production-grade data pipelines. &lt;&#x2F;p&gt;
&lt;p&gt;Analysts using dbt can transform their data by simply writing select statements, while dbt handles turning these statements into tables and views in a data warehouse. These select statements, or “models”, form a dbt project. Models frequently build on top of one another – dbt makes it easy to manage relationships between models, and visualize these relationships, as well as assure the quality of your transformations through testing.&lt;&#x2F;p&gt;
&lt;p&gt;Engineers and analysts can deploy safely to dev environments, testing their assumptions and completing their assertions on the data. Git-enabled version control not only enables collaboration and a means to return to previous states, but also fosters an ecosystem in which data team members can easily get their code peer reviewed. Within the CI&#x2F;CD pipeline, teams are able to test every model prior to production and share dynamically generated documentation with all data stakeholders. Teams can write modular data transformations in .sql or .py files that can easily be reused; dbt handles the chore of dependency management. &lt;&#x2F;p&gt;
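&lt;p&gt;This workflow boils down to a handful of CLI commands. As a minimal sketch (the model name &lt;code&gt;stg_orders&lt;&#x2F;code&gt; is a hypothetical example, not from any project referenced here):&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# build the model plus everything downstream of it
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;dbt&lt;&#x2F;span&gt;&lt;span&gt; run&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --select&lt;&#x2F;span&gt;&lt;span&gt; stg_orders+
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# run the tests declared against that model
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;dbt&lt;&#x2F;span&gt;&lt;span&gt; test&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --select&lt;&#x2F;span&gt;&lt;span&gt; stg_orders
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# regenerate the browsable documentation site
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;dbt&lt;&#x2F;span&gt;&lt;span&gt; docs generate
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;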
&lt;h2 id=&quot;what-dbttm-isn-t&quot;&gt;What dbt™ Isn’t&lt;&#x2F;h2&gt;
&lt;p&gt;dbt is not a data warehouse or a database itself, but rather a tool that can be used in conjunction with a data warehouse to make it easier to work with and manage data. Additionally, dbt is not a programming language, but it does use a programming-like syntax to specify how data should be transformed and loaded into a data warehouse. It is also not a visualization tool, although it can be used in conjunction with visualization tools like Tableau or Looker to help users understand and analyze their data.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;why-dbttm&quot;&gt;Why dbt™?&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;eliminate-data-silos&quot;&gt;Eliminate Data Silos&lt;&#x2F;h3&gt;
&lt;p&gt;Now your data teams can build models that connect with those built by the analytics team, each using the language they prefer. dbt supports modeling in SQL or Python, enabling a shared workspace for everyone who works on analytics code. &lt;&#x2F;p&gt;
&lt;h3 id=&quot;reliability-and-repeatability&quot;&gt;Reliability and Repeatability&lt;&#x2F;h3&gt;
&lt;p&gt;dbt’s “version control integration” means that all changes to data models are tracked. This is crucial in ensuring that past versions can be quickly restored, guaranteeing data integrity. With dbt data teams can build observability into transformation workflows with in-app scheduling, logging, and alerting. Protection policies on branches ensure data moves through governed processes including dev, stage, and prod environments generated by every CI run.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;self-documenting-code&quot;&gt;Self-documenting Code&lt;&#x2F;h3&gt;
&lt;p&gt;In dbt, metadata about data models can be embedded directly into the code. For example, you can include annotations within SQL files, which dbt will use to auto-generate documentation. This visibility is beneficial for cross-functional teams to understand data models. &lt;&#x2F;p&gt;
&lt;h3 id=&quot;collaboration&quot;&gt;Collaboration&lt;&#x2F;h3&gt;
&lt;p&gt;dbt’s integration with Git for version control fosters collaboration. Team members can work on parallel branches and easily merge changes, encouraging a more team-based approach. &lt;&#x2F;p&gt;
&lt;h3 id=&quot;code-reviews&quot;&gt;Code Reviews&lt;&#x2F;h3&gt;
&lt;p&gt;For data engineering projects, code reviews ensure that the data transformations adhere to predefined quality standards, making the data more trustworthy. &lt;&#x2F;p&gt;
&lt;h3 id=&quot;extensive-community-ecosystem&quot;&gt;Extensive Community Ecosystem&lt;&#x2F;h3&gt;
&lt;p&gt;Through dbt Hub (&lt;a href=&quot;https:&#x2F;&#x2F;hub.getdbt.com&#x2F;&quot;&gt;https:&#x2F;&#x2F;hub.getdbt.com&#x2F;&lt;&#x2F;a&gt;), teams building with dbt can leverage community packages (from contributors like dbt Labs, Fivetran, and others) to refine the raw data in your warehouse. These community-available packages provide SQL macros that can be (re)used across dbt projects, from useful macros for performing data audits, to base models for Redshift system tables, to sophisticated privacy transformations that reduce reidentification risk but leave data usable for analysts. &lt;&#x2F;p&gt;
&lt;h2 id=&quot;limitations&quot;&gt;Limitations&lt;&#x2F;h2&gt;
&lt;p&gt;While dbt offers powerful features, it has a learning curve and requires a cultural shift toward treating data as code. Furthermore, for very large datasets, dbt’s in-database processing may limit performance.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;how-dbttm-fits-with-ci-cd&quot;&gt;How dbt™ Fits with CI&#x2F;CD&lt;&#x2F;h2&gt;
&lt;p&gt;Continuous Integration&#x2F;Continuous Deployment (CI&#x2F;CD) is a best practice in modern software engineering that involves automatically building, testing, and deploying code changes to production environments. In the context of data analytics, dbt brings the same rigor to data transformation, testing, and documentation. When dbt is integrated into a CI&#x2F;CD pipeline, it enables automated data testing, version control, and deployment, thereby offering a comprehensive solution for agile data analytics. Azure DevOps serves as a robust platform to implement this integration, providing a variety of tools for code repository management, build and release pipelines, and more. Here’s how dbt fits seamlessly with CI&#x2F;CD, particularly when using Azure DevOps.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;automated-testing&quot;&gt;Automated Testing&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Pre-Commit Hooks: Before a developer pushes code to the repository, pre-commit hooks can run dbt tests locally to ensure data models are correct.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Pipeline Steps: As part of the CI&#x2F;CD pipeline in Azure DevOps, automated tests can be configured to run every time there’s a code change. This ensures that the data models and transformations are correct before they are deployed.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Quality Gates: dbt tests can act as quality gates in the pipeline. If a test fails, the pipeline can be configured to halt, preventing the deployment of faulty data models.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
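&lt;p&gt;As a rough sketch of how these quality gates wire together, a single Azure DevOps pipeline step can install dbt, resolve packages, and run every model and test; the &lt;code&gt;ci&lt;&#x2F;code&gt; target name and profile wiring here are assumptions, not a drop-in config:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# azure-pipelines.yml (fragment)
&lt;&#x2F;span&gt;&lt;span&gt;steps:
&lt;&#x2F;span&gt;&lt;span&gt;  - script: |
&lt;&#x2F;span&gt;&lt;span&gt;      pip install dbt-core dbt-redshift
&lt;&#x2F;span&gt;&lt;span&gt;      dbt deps
&lt;&#x2F;span&gt;&lt;span&gt;      dbt build --target ci   &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# run + test every model; a failing test halts the pipeline
&lt;&#x2F;span&gt;&lt;span&gt;    displayName: dbt build and test
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;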
&lt;h3 id=&quot;version-control&quot;&gt;Version Control&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;dbt Project in Azure Repos: Your dbt project can be stored in Azure Repos, which allows it to benefit from version control, branching, and pull requests, just like any other codebase.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Branch Policies: Azure DevOps allows you to set up branch policies, ensuring that dbt models can only be merged after passing specified criteria, like code reviews and successful test runs.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Artifact Management: The built and tested dbt artifacts can be stored in Azure Artifacts, providing a historical record and facilitating rollbacks if necessary.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;deployment-automation&quot;&gt;Deployment Automation&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Parameterized Runs: Azure DevOps pipelines can be parameterized to deploy dbt models to various environments (e.g., dev, staging, prod) based on the branch being merged.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Automated Rollbacks: If a dbt model fails in a higher environment like staging or production, Azure DevOps pipelines can be configured to automatically rollback to the previous stable version.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Monitoring and Notifications: Azure DevOps provides monitoring tools and can be set to send notifications if the pipeline fails or if there are issues with the deployed dbt models.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h3 id=&quot;collaboration-and-access-control&quot;&gt;Collaboration and Access Control&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Role-Based Access: Azure DevOps supports role-based access control, allowing you to specify who can perform actions like triggering pipelines or merging dbt models.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Collaboration Features: Azure Boards can be used for task tracking, linking your data tasks directly to your dbt development efforts.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Documentation: dbt’s inherent documentation generation can be integrated into the CI&#x2F;CD process, keeping all stakeholders informed about the current state of the data models.&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;dbt and CI&#x2F;CD together form an effective methodology for agile data transformation and analytics. By leveraging Azure DevOps as the platform for implementing this integration, organizations can automate data quality checks, maintain version control, and streamline the deployment process, resulting in faster, more reliable data analytics pipelines.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;complementing-aws-services&quot;&gt;Complementing AWS Services&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;redshift-dbt-and-redshift-can-optimize-data-transformations-and-analytics&quot;&gt;Redshift: dbt and Redshift can optimize data transformations and analytics.&lt;&#x2F;h3&gt;
&lt;p&gt;Amazon Redshift is a fully managed data warehouse service, optimized for running complex queries and performing data analytics tasks. One of its key features is the ability to create materialized views, which pre-compute and store query results for faster retrieval. dbt can take advantage of Redshift’s features like materialized views to optimize data transformations. By utilizing these features, dbt models can be designed to run much faster, which is especially useful when dealing with large data sets or complex transformations. This can lead to reduced compute costs and faster time-to-insight for data analytics. Redshift uses a PostgreSQL-compatible SQL syntax, which is also the primary language for defining transformations in dbt. This compatibility means that you can seamlessly move SQL code between dbt and Redshift, thereby speeding up the development process. dbt’s built-in version control capabilities enable team members to collaborate on a single, unified model that leverages Redshift’s power. This ensures that changes are transparent, trackable, and reversible, adding a layer of reliability to your data processes.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;glue-glue-handles-data-ingestion-while-dbttm-can-focus-on-the-transformation-layer&quot;&gt;Glue: Glue handles data ingestion, while dbt™ can focus on the transformation layer.&lt;&#x2F;h3&gt;
&lt;p&gt;AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for users to prepare and load data for analytics. AWS Glue can focus on data ingestion and schema management, preparing the raw data for subsequent transformations. Once the data is ingested into Redshift or another data warehouse, dbt can then take over to perform the transformations, tests, and data quality checks. This separation of concerns creates a modular data pipeline that is easier to manage, monitor, and scale. Glue’s Data Catalog feature acts as a centralized repository for metadata, allowing seamless transition into dbt. This improves discovery and enhances governance, as dbt can also document transformations, tests, and lineage. Glue and dbt complement each other by creating a more streamlined, end-to-end ETL process. Glue handles the initial steps, while dbt optimizes the data transformation layer, resulting in a more agile and efficient workflow.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;s3-dbttm-can-also-read-from-and-write-to-aws-s3-buckets-making-it-versatile-in-handling-various-data-storage-solutions&quot;&gt;S3: dbt™ can also read from and write to AWS S3 buckets, making it versatile in handling various data storage solutions.&lt;&#x2F;h3&gt;
&lt;p&gt;Amazon S3 (Simple Storage Service) is a widely used object storage service, suitable for storing large volumes of unstructured data. dbt’s capability to read from and write to S3 buckets adds an extra layer of versatility. This means you’re not confined to using a data warehouse for storage; you can also use S3 for raw or transformed data, giving you more options in how you architect your data solutions. Many organizations use S3 as a data lake to store raw data. dbt’s compatibility with S3 allows it to directly read this raw data, transform it, and either load it into a data warehouse like Redshift for analytical workloads or write it back to S3 in a more structured format. S3 provides a cost-effective storage solution, especially for long-term data retention. By using dbt to write transformation outputs back to S3, you can optimize costs for storage while maintaining high data quality and accessibility.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;&#x2F;h2&gt;
&lt;p&gt;In the fast-paced, data-centric business environment of today, the need for agile, reliable, and effective data analytics solutions cannot be overstated. dbt emerges as a game-changer in this context, offering a robust framework that fosters collaboration, enhances data integrity, and expedites the deployment of data products. Its SQL-first transformation workflow, Git-enabled version control, and built-in CI&#x2F;CD capabilities make it an indispensable tool for data professionals including data engineers, analysts, and architects.&lt;&#x2F;p&gt;
&lt;p&gt;dbt’s compatibility with AWS services further amplifies its efficacy. When paired with Amazon Redshift, dbt leverages advanced data warehousing capabilities to optimize transformations and analytics. Its synergy with AWS Glue simplifies the ETL process, creating a streamlined workflow from data ingestion to transformation. The ability to interact with AWS S3 offers additional storage flexibility, extending its utility across different storage paradigms.&lt;&#x2F;p&gt;
&lt;p&gt;Despite its learning curve and potential limitations for handling extremely large datasets, the advantages of implementing dbt, especially in conjunction with AWS services, far outweigh the challenges. With its focus on eliminating data silos, enforcing data reliability, enabling self-documenting code, and encouraging collaboration, dbt acts as a catalyst in the evolution of modern data stacks. When integrated into a CI&#x2F;CD pipeline using platforms like Azure DevOps, it adds another layer of automation and governance, making your data operations not just agile but also resilient.&lt;&#x2F;p&gt;
&lt;p&gt;In summary, dbt should not be seen merely as an option but rather as a strategic asset for any organization serious about accelerating its data capabilities. Its power to transform data engineering practices, coupled with its seamless fit within the broader AWS ecosystem, makes it a compelling choice for modernizing your data stack. Adopting dbt is not just a technological decision; it’s a business imperative for organizations aiming to be data-driven in a competitive landscape.&lt;&#x2F;p&gt;
&lt;h1 id=&quot;appendix&quot;&gt;Appendix&lt;&#x2F;h1&gt;
&lt;h2 id=&quot;use-cases-articles&quot;&gt;Use cases&#x2F; Articles&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;build-your-data-pipeline-in-your-aws-modern-data-platform-using-aws-lake-formation-aws-glue-and-dbt-core&quot;&gt;Build your data pipeline in your AWS modern data platform using AWS Lake Formation, AWS Glue, and dbt Core&lt;&#x2F;h3&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;www.getdbt.com&#x2F;&quot;&gt;dbt&lt;&#x2F;a&gt; has established itself as one of the most popular tools in the modern data stack, and is aiming to bring analytics engineering to everyone. The dbt tool makes it easy to develop and implement complex data processing pipelines, with mostly SQL, and it provides developers with a simple interface to create, test, document, evolve, and deploy their workflows. For more information, see &lt;a href=&quot;https:&#x2F;&#x2F;docs.getdbt.com&#x2F;&quot;&gt;docs.getdbt.com&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;dbt primarily targets cloud data warehouses such as &lt;a href=&quot;http:&#x2F;&#x2F;aws.amazon.com&#x2F;redshift&quot;&gt;Amazon Redshift&lt;&#x2F;a&gt; or Snowflake. Now, you can use dbt against AWS data lakes, thanks to the following two services:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;docs.aws.amazon.com&#x2F;glue&#x2F;latest&#x2F;dg&#x2F;interactive-sessions-overview.html&quot;&gt;AWS Glue Interactive Sessions&lt;&#x2F;a&gt;, a serverless Apache Spark runtime environment managed by &lt;a href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;glue&#x2F;&quot;&gt;AWS Glue&lt;&#x2F;a&gt; with on-demand access and a 1-minute billing minimum&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;lake-formation&#x2F;&quot;&gt;AWS Lake Formation&lt;&#x2F;a&gt;, a service that makes it easy to quickly set up a secure data lake&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;In this post, you’ll learn how to deploy a data pipeline in your modern data platform using the dbt-glue adapter built by the AWS Professional Services team in collaboration with dbt Labs.&lt;&#x2F;p&gt;
&lt;p&gt;With this new open-source, battle-tested dbt AWS Glue adapter, developers can now use dbt for their data lakes, paying for just the compute they need, with no need to shuffle data around. They still have access to everything that makes dbt great, including the local developer experience, documentation, tests, incremental data processing, Git integration, CI&#x2F;CD, and more.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;big-data&#x2F;build-your-data-pipeline-in-your-aws-modern-data-platform-using-aws-lake-formation-aws-glue-and-dbt-core&#x2F;&quot;&gt;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;big-data&#x2F;build-your-data-pipeline-in-your-aws-modern-data-platform-using-aws-lake-formation-aws-glue-and-dbt-core&#x2F;&lt;&#x2F;a&gt; &lt;&#x2F;p&gt;
&lt;h3 id=&quot;how-safetyculture-scales-unpredictable-dbt-cloud-workloads-in-a-cost-effective-manner-with-amazon-redshift&quot;&gt;How SafetyCulture scales unpredictable dbt Cloud workloads in a cost-effective manner with Amazon Redshift&lt;&#x2F;h3&gt;
&lt;p&gt;SafetyCulture runs an Amazon Redshift provisioned cluster to support unpredictable and predictable workloads. A source of unpredictable workloads is dbt Cloud, which SafetyCulture uses to manage data transformations in the form of models. Whenever models are created or modified, a dbt Cloud CI job is triggered to test the models by materializing the models in Amazon Redshift. To balance the needs of unpredictable and predictable workloads, SafetyCulture used Amazon Redshift workload management (WLM) to flexibly manage workload priorities.&lt;&#x2F;p&gt;
&lt;p&gt;With plans for further growth in dbt Cloud workloads, SafetyCulture needed a solution that does the following:&lt;&#x2F;p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Caters for unpredictable workloads in a cost-effective manner&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Separates unpredictable workloads from predictable workloads to scale compute resources independently&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;
&lt;p&gt;Continues to allow models to be created and modified based on production data&lt;&#x2F;p&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;big-data&#x2F;how-safetyculture-scales-unpredictable-dbt-cloud-workloads-in-a-cost-effective-manner-with-amazon-redshift&#x2F;&quot;&gt;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;big-data&#x2F;how-safetyculture-scales-unpredictable-dbt-cloud-workloads-in-a-cost-effective-manner-with-amazon-redshift&#x2F;&lt;&#x2F;a&gt; &lt;&#x2F;p&gt;
&lt;h3 id=&quot;manage-data-transformations-with-dbt-in-amazon-redshift&quot;&gt;Manage data transformations with dbt in Amazon Redshift&lt;&#x2F;h3&gt;
&lt;p&gt;In this post, we demonstrate some features in dbt that help you manage data transformations in Amazon Redshift. We also provide the dbt CLI and Amazon Redshift workshop to get started using these features.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;big-data&#x2F;manage-data-transformations-with-dbt-in-amazon-redshift&#x2F;&quot;&gt;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;big-data&#x2F;manage-data-transformations-with-dbt-in-amazon-redshift&#x2F;&lt;&#x2F;a&gt; &lt;&#x2F;p&gt;
&lt;h3 id=&quot;automating-deployment-of-amazon-redshift-etl-jobs-with-aws-codebuild-aws-batch-and-dbttm&quot;&gt;Automating deployment of Amazon Redshift ETL jobs with AWS CodeBuild, AWS Batch, and dbt™&lt;&#x2F;h3&gt;
&lt;p&gt;In this post, we show you how to automate the deployment of &lt;a href=&quot;http:&#x2F;&#x2F;aws.amazon.com&#x2F;redshift&quot;&gt;Amazon Redshift&lt;&#x2F;a&gt; ETL jobs using &lt;a href=&quot;http:&#x2F;&#x2F;aws.amazon.com&#x2F;batch&quot;&gt;AWS Batch&lt;&#x2F;a&gt; and &lt;a href=&quot;http:&#x2F;&#x2F;aws.amazon.com&#x2F;codebuild&quot;&gt;AWS CodeBuild&lt;&#x2F;a&gt;. AWS Batch allows you to run your data transformation jobs without having to install and manage batch computing software or server clusters. CodeBuild is a fully managed continuous integration service that builds your data transformation project into a Docker image run in AWS Batch. This deployment automation can help you shorten the time to value. These two services are also fully managed and incur fees only when run, which optimizes costs.&lt;&#x2F;p&gt;
&lt;p&gt;We also introduce a third-party tool for the ETL jobs: &lt;a href=&quot;https:&#x2F;&#x2F;blog.getdbt.com&#x2F;what--exactly--is-dbt-&#x2F;&quot;&gt;dbt™&lt;&#x2F;a&gt;, which enables data analysts and engineers to write data transformation queries in a modular manner without having to maintain the execution order manually. It compiles all code into raw SQL queries that run against your Amazon Redshift cluster to use existing computing resources. It also understands dependencies within your queries and runs them in the correct order. dbt™ code is a combination of SQL and Jinja (a templating language); therefore, you can express logic such as if statements, loops, filters, and macros in your queries. For more information, see &lt;a href=&quot;https:&#x2F;&#x2F;docs.getdbt.com&#x2F;docs&#x2F;introduction&#x2F;&quot;&gt;dbt™ Documentation&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;big-data&#x2F;automating-deployment-of-amazon-redshift-etl-jobs-with-aws-codebuild-aws-batch-and-dbt&#x2F;&quot;&gt;https:&#x2F;&#x2F;aws.amazon.com&#x2F;blogs&#x2F;big-data&#x2F;automating-deployment-of-amazon-redshift-etl-jobs-with-aws-codebuild-aws-batch-and-dbt&#x2F;&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h3 id=&quot;accelerating-data-teams-with-dbt-cloud-snowflake&quot;&gt;Accelerating Data Teams with dbt Cloud &amp;amp; Snowflake&lt;&#x2F;h3&gt;
&lt;p&gt;Modern businesses need modern data strategies, built on platforms that support agility, growth and operational efficiency.&lt;&#x2F;p&gt;
&lt;p&gt;Snowflake is the Data Cloud, a future-proof solution that simplifies data pipelines, so you can focus on data and analytics instead of infrastructure management.&lt;&#x2F;p&gt;
&lt;p&gt;dbt is a transformation workflow that lets teams quickly and collaboratively deploy analytics code following software engineering best practices like modularity, portability, CI&#x2F;CD, and documentation. Now anyone who knows SQL can build production-grade data pipelines. It transforms data in the warehouse, leveraging cloud data platforms like Snowflake.&lt;&#x2F;p&gt;
&lt;p&gt;In this Quickstart, you will follow a step-by-step guide to using dbt with Snowflake, and see some of the benefits this tandem brings.
&lt;a href=&quot;https:&#x2F;&#x2F;quickstarts.snowflake.com&#x2F;guide&#x2F;data_teams_with_dbt_cloud&#x2F;#0&quot;&gt;https:&#x2F;&#x2F;quickstarts.snowflake.com&#x2F;guide&#x2F;data_teams_with_dbt_cloud&#x2F;#0&lt;&#x2F;a&gt; &lt;&#x2F;p&gt;
&lt;h3 id=&quot;dbt-data-build-tool-overview-what-is-dbt-and-what-can-it-do-for-my-data-pipeline&quot;&gt;dbt (Data Build Tool) Overview: What is dbt and What Can It Do for My Data Pipeline?&lt;&#x2F;h3&gt;
&lt;p&gt;There are many tools on the market to help your organization transform data and make it accessible for business users. One that we recommend and use often—dbt (data build tool) —focuses solely on making the process of transforming data simpler and faster. In this blog we will discuss what dbt is, how it can transform the way your organization curates its data for decision making, and how you can get started with using dbt (data build tool).&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;www.analytics8.com&#x2F;blog&#x2F;dbt-overview-what-is-dbt-and-what-can-it-do-for-my-data-pipeline&#x2F;&quot;&gt;https:&#x2F;&#x2F;www.analytics8.com&#x2F;blog&#x2F;dbt-overview-what-is-dbt-and-what-can-it-do-for-my-data-pipeline&#x2F;&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;h3 id=&quot;activating-ownership-with-data-contracts-in-dbt&quot;&gt;Activating ownership with data contracts in dbt&lt;&#x2F;h3&gt;
&lt;p&gt;Data mesh, data contracts, and shifting ownership to data producers have been presented as the solution. Data teams have largely bought into this promise, but only a fraction can confidently say that they’ve seen the expected impact from their efforts. One of the reasons for this is that contracts introduce yet another technical and organizational burden for already stretched engineers.&lt;&#x2F;p&gt;
&lt;p&gt;With dbt 1.5 and the support for data contracts, data teams have the opportunity to roll out contracts themselves. While it doesn’t solve the full problem it provides a way to enforce quality at the intersection of teams that can be rolled out in days instead of months.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;medium.com&#x2F;@mikldd&#x2F;activating-ownership-with-data-contracts-in-dbt-4f2de41c4657&quot;&gt;https:&#x2F;&#x2F;medium.com&#x2F;@mikldd&#x2F;activating-ownership-with-data-contracts-in-dbt-4f2de41c4657&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
</description>
      </item>
      <item>
          <title>Tips &amp; Tricks: Super Delete S3 Objects</title>
          <pubDate>Thu, 01 Jun 2023 00:00:00 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-super-delete-s3-objects/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-super-delete-s3-objects/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/tips-super-delete-s3-objects/">&lt;p&gt;When you need to delete tens of thousands of S3 objects:&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;export &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BUCKET&lt;&#x2F;span&gt;&lt;span&gt;=&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;ecs-dbt-dev-staging-bucket
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;export &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;PREFIX&lt;&#x2F;span&gt;&lt;span&gt;=&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;Connecticut&#x2F;EmplId-Empl&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;aws&lt;&#x2F;span&gt;&lt;span&gt; s3api list-objects-v2&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --bucket &lt;&#x2F;span&gt;&lt;span&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;BUCKET --prefix &lt;&#x2F;span&gt;&lt;span&gt;$&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;PREFIX --output&lt;&#x2F;span&gt;&lt;span&gt; text&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --query &lt;&#x2F;span&gt;&lt;span&gt;\
&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;Contents[].[Key]&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39; | &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;grep -v -e &lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot; | &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;tr &lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;\n&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39; &amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;\0&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39; | &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;xargs -0 -P2 -n500&lt;&#x2F;span&gt;&lt;span&gt; bash&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; -c &lt;&#x2F;span&gt;&lt;span&gt;\
&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;aws s3api delete-objects --bucket $BUCKET --delete &amp;quot;Objects=[$(printf &amp;quot;{Key=%q},&amp;quot; &amp;quot;$@&amp;quot;)], Quiet=true&amp;quot;&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39; _ 
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
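&lt;p&gt;Before running the delete for real, it is worth sanity-checking the payload that &lt;code&gt;printf&lt;&#x2F;code&gt; builds. A small sketch — the helper function is mine for illustration, not part of the pipeline above:&lt;&#x2F;p&gt;

```shell
# Illustrative helper: preview the delete payload the xargs one-liner passes
# to `aws s3api delete-objects`. printf %q shell-quotes each key so spaces
# and quotes survive the trip through xargs.
build_payload() {
  printf 'Objects=[%s],Quiet=true' "$(printf '{Key=%q},' "$@")"
}

build_payload "Connecticut/EmplId-Empl/part-0001.csv" "Connecticut/EmplId-Empl/odd name.csv"
```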
</description>
      </item>
      <item>
          <title>It&#x27;s still day 1...</title>
          <pubDate>Sun, 07 May 2023 21:39:46 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/its-still-day-1/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/its-still-day-1/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/its-still-day-1/">&lt;h2 id=&quot;backstory&quot;&gt;Backstory&lt;&#x2F;h2&gt;
&lt;p&gt;How does one self-manage their thoughts, time, aspirations, or memories? Recently, while I was FaceTiming my brother, I noticed he had handwritten notes on one of his hands. I had an instant flashback to high school, where I often did the same thing to track phone numbers, assignments, or just to draw a face (though my brother is nearing his 30s). I realized I’ve been hacking through ways to track and tame my brain’s musings, upcoming tasks, and memories for years. I’ve gone through numerous phases: photographing everything with a DSLR, carrying back-pocket note cards and a space pen, the fountain pen era, post-it notes, journals, legal pads, whiteboards, todo lists, kanban boards. Year after year I find there is always something a bit lacking with me and my process.&lt;&#x2F;p&gt;
&lt;p&gt;First up is me: I’m a bit scatter-brained, often moving from one shiny object to the next. I do really well when I have structure and I know I’ve done all my 7Ps (prior proper planning prevents piss poor performance). The end result is striving to do better, and technology helps with this; a key point is tagging anything I can with AirTags. Finding my keys? No big deal. Sunglasses, on the other hand, I’m probably on my third attempt scouring the house. What this actually means is that I’ve spent years building out processes that work for me and tossing out what doesn’t. Right now I feel like I’m in an aggressive Marie Kondo phase.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;rule-one-no-loosies&quot;&gt;Rule One: No Loosies&lt;&#x2F;h3&gt;
&lt;p&gt;Legal pads, post-it notes, and blank sheets of paper are a no-go. I got myself into a bad habit of scribbling little notes (actually conceptually important trinkets of knowledge) in piles or heaps on my desk. My shiny-object brain now has a weird spider sense for where I tracked some bit of context. That system had to go; all it did was build clutter in my workspace, forcing me to dig around looking for that one legal pad I wrote my note on three months ago, which now found itself acting as the wheel block so the Lego Bugatti doesn’t roll off the shelf.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;rule-two-context-segmentation&quot;&gt;Rule Two: Context Segmentation&lt;&#x2F;h3&gt;
&lt;p&gt;I’ve got three daily driver notebooks: one I use to scribble out diagrams real quick, a recipe cookbook, and a travel companion. I don’t think these need to fall by the wayside. Keeping ideas or thoughts compartmentalized in journals is useful; you still get the tactile experience of putting pen to paper to capture the moment. I like to call this napkin math, a means of capturing ephemeral ideas not fully formed; they might need to stew a little longer. Gone is the time for notebooks &amp;amp; journals to be scattered throughout my office, kitchen, and various bags. For the most part these are in place to act as a brain aid, not for record keeping.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;rule-three-consolidate-the-digital-brain&quot;&gt;Rule Three: Consolidate the Digital Brain&lt;&#x2F;h3&gt;
&lt;p&gt;For a while, I found a groove using OneNote and was a pretty consistent (albeit very basic) user. I found comfort in a ritualistic, reliable routine: keeping a daily activity log, noting what I was going to do next, and planning my projects kept me really focused. However, between changing jobs, a major data-loss incident due to OneNote’s weird syncing, its haphazard migration to a cloud product, my migration from Android to iOS, and various nitpicks with the product, I gave up on any aspiration of OneNote being my source of truth for digital note taking.&lt;&#x2F;p&gt;
&lt;p&gt;Since then my note taking experience has been subpar; I’ve slowly fragmented across services like Apple Notes, Google Keep, Evernote, Notion, OneNote, Bear, email, or for a while just Google Docs. I’ve spent five or so years trying to rebuild what I perceive as a good habit. This year I decided I’d attempt to reset the foundations, with a bit more of a constructive focus. I started out giving &lt;a href=&quot;https:&#x2F;&#x2F;roamresearch.com&#x2F;&quot;&gt;roam research&lt;&#x2F;a&gt; a shot; it immediately didn’t take. I actually dusted off OneNote again (it’s been like half a decade) and it was like reuniting with a familiar friend, now with improvements. Its sync was better, and for my initial purpose of tracking notes, logs, and what have yous it was doing just fine. I was bringing back that structure I was looking for.&lt;&#x2F;p&gt;
&lt;p&gt;Around this time I was seeing similar conversations in my company’s Slack channels, with whispers about what people were doing for notes. One person used Obsidian, another Notion; Roam and a few others also kept popping up. I wasn’t completely enamored with OneNote, as I’ve been burned in the past, so I figured I’d install Obsidian and see what the fuss was about. I got off to a slow start: I didn’t dive too deep into the product and really only knew it supported markdown notes (my current preferred digital format). I let it sit on the shelf for a couple months.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;rule-four-use-tooling-you-like&quot;&gt;Rule Four: Use Tooling You Like&lt;&#x2F;h3&gt;
&lt;p&gt;I’m fond of mocking up my understanding of how something works by drawing some lines accompanied by notes. Don’t know why, but give me some lines and a list and I can sigh with relief that I’m going to be able to follow along; add some color and I’m totally smitten. I’m particularly a fan of the robust offerings available to aid these efforts: Mermaid diagrams, PlantUML, Mural, Lucidchart, and others have all aided me at some point. Couple some of those tools with the principles of C4 context diagrams and you have a programmatic means to convey logical understanding to various participant personas.&lt;&#x2F;p&gt;
&lt;p&gt;People and teams can programmatically generate diagrams reflecting the logical structure of an application, database, or an entire code base. With CI&#x2F;CD tooling, updates to the codebase can be reflected on every merge, keeping these diagrams versioned throughout the application lifecycle. Though when attempting to convey understanding to a wider audience that may not be as in tune with a codebase, is newly learning the ideas, or just might not be so technical, I’ve noticed that color-coordinated, image-based diagrams help bridge the gap. By giving people something to attach to and recognize, they may have an easier time following along. You can see this in content produced by &lt;a href=&quot;https:&#x2F;&#x2F;bytebytego.com&#x2F;&quot;&gt;ByteByteGo&lt;&#x2F;a&gt; or the team at Swirl AI. I personally gravitated to the content Aurimas Griciūnas of &lt;a href=&quot;https:&#x2F;&#x2F;www.swirlai.com&#x2F;&quot;&gt;Swirl AI&lt;&#x2F;a&gt; was producing. I absolutely needed to know what he was using to build his diagrams. I reached out and found he’s building things with Excalidraw. I immediately started using it to mock out my understanding of complex services and ideas.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;rule-five-document-and-share-your-experiences&quot;&gt;Rule Five: Document and Share your experiences&lt;&#x2F;h3&gt;
&lt;p&gt;Really, that’s it… Make some content, good or bad, and share it out.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;lightbulbs&quot;&gt;Lightbulbs&lt;&#x2F;h3&gt;
&lt;p&gt;To catch things back up: I’m doodling in designated notebooks, I’m re-establishing good digital habits (mostly in OneNote), and at this point I’m newly mocking up Excalidraw diagrams, which coincides with writing content for internal engineering blog articles and presentations. During my digital Kondo phase, a former colleague of mine was a prolific sharer of tech tidbits, articles, repos, and tweets, all related to software engineering or just cool things he found. At the company his shares gained somewhat of an internal cult following, with a dedicated Slack channel. He had just implemented an RSS feed for all his shares (there’s a lot). I was curious how he was managing this firehose of information. So the secret’s out… he’s just using Obsidian to take note of anything he finds cool, then converting the notes to a website. See his site and RSS feed at &lt;a href=&quot;https:&#x2F;&#x2F;notes.billmill.org&quot;&gt;notes.billmill.org&lt;&#x2F;a&gt;, or check out the &lt;a href=&quot;https:&#x2F;&#x2F;github.com&#x2F;llimllib&#x2F;notes&quot;&gt;source code&lt;&#x2F;a&gt;.&lt;&#x2F;p&gt;
&lt;p&gt;This was largely the beginning of my light bulb moment(s). I needed to double down on Obsidian, ditch OneNote, and maybe I could do the same but submit content to Substack. This is where I went exploring the community plugins; the top download is a local version of Excalidraw, with community scripts to wrangle diagrams. Game over… I’m making this happen: an Obsidian-notes-based dev blog with diagrams, I just need to sort out the little bits.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;dev-blog&quot;&gt;Dev Blog&lt;&#x2F;h2&gt;
&lt;p&gt;I have an on-again, off-again relationship with dev blogs. I think this is my third iteration of hosting one, after a venture using Umbraco, then Ghost, and now Hugo (I’m not counting GeoCities). Let’s see if it lasts more than a month. 🤞&lt;&#x2F;p&gt;
&lt;p&gt;This time I at least have a scope: host the dev blog using a static content generator, use CI&#x2F;CD to provision IaC, do it serverless, and live-update content from Obsidian notes.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;static-generator&quot;&gt;Static Generator&lt;&#x2F;h3&gt;
&lt;p&gt;For choosing a static website generator I went with Hugo; it seemed popular and there are a ton of themes with varying support.
I picked the LoveIt theme as it looked basic enough and had dark mode.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;hosting&quot;&gt;Hosting&lt;&#x2F;h3&gt;
&lt;p&gt;Netlify hosting is certainly the easiest, and AWS Amplify was a quick two-stepper to get working. Easy, but not quite what I wanted: I was looking to surface the unseen levers being pulled behind those services, at least to some degree. So there were a couple hiccups along the way, like needing to set up an AWS org so I could use OIDC role logins (for myself and machine roles) and going through the new AWS Identity Center SSO. Coordinating Obsidian git sync was both straightforward to set up and awful, but I feel like I read into it too much, so I had some churn there. Live updating on content pushes to GitHub was actually easy, until I thought about immutable deploys. The hardest, most head-banging moments were security headers being blocked and invalidating all the CDN content.&lt;&#x2F;p&gt;
&lt;p&gt;Look! Embedded diagram mocked up in my note taking app:&lt;&#x2F;p&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;the_easy_way.png” caption=“The Easy Way” alt=“Architecture Layout of Dev Blog using Obsidian Hugo and AWS” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;the_irritating_way.png” caption=“The Irritating Way” alt=“Architecture Layout of Dev Blog using Obsidian Hugo and AWS” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
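&lt;p&gt;For anyone curious what the OIDC role login wiring looks like on the AWS side, here is a minimal sketch. The account ID and role name are placeholders, and &lt;code&gt;trust_policy&lt;&#x2F;code&gt; is an illustrative helper; the real setup lives in the Terraform described later:&lt;&#x2F;p&gt;

```shell
# Sketch of federating GitHub Actions into AWS via OIDC.
# ACCOUNT_ID and the role name below are placeholders.
ACCOUNT_ID=123456789012
REPO=kcirtapfromspace/kcirtap.io

# Trust policy allowing workflows in this repo to call
# sts:AssumeRoleWithWebIdentity via the GitHub OIDC provider.
trust_policy() {
  printf '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Federated":"arn:aws:iam::%s:oidc-provider/token.actions.githubusercontent.com"},"Action":"sts:AssumeRoleWithWebIdentity","Condition":{"StringLike":{"token.actions.githubusercontent.com:sub":"repo:%s:*"}}}]}' \
    "$ACCOUNT_ID" "$REPO"
}

# One-time account setup (shown as comments, not executed here):
#   aws iam create-open-id-connect-provider --url https://token.actions.githubusercontent.com --client-id-list sts.amazonaws.com
#   aws iam create-role --role-name blog-deploy --assume-role-policy-document "$(trust_policy)"
trust_policy
```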
&lt;h3 id=&quot;live-pushes-from-obsidian&quot;&gt;Live pushes from Obsidian&lt;&#x2F;h3&gt;
&lt;p&gt;Obsidian is pretty good with its auto commits and backups to the remote origin on GitHub.&lt;&#x2F;p&gt;
&lt;div&gt;{{&lt; image src=&quot;static&#x2F;obsidian_git.png&quot; caption=&quot;Obsidian Source Control&quot; alt=&quot;Obsidian Source Control&quot; height=&quot;50&quot; width=&quot;50&quot; &gt;}}
{{&lt; image src=&quot;static&#x2F;unpublished_git.png&quot; caption=&quot;Obsidian Auto Commit&quot; alt=&quot;Obsidian Auto Commit&quot; height=&quot;50&quot; width=&quot;50&quot; &gt;}}
&lt;&#x2F;div&gt;
&lt;p&gt;After coordinating the git sync between all my devices and GitHub, I set up a GitHub Actions trigger to refresh the content on the fly. I originally started with Repository Dispatch to manage the event, which worked without issue.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;yml&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-yml &quot;&gt;&lt;code class=&quot;language-yml&quot; data-lang=&quot;yml&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;name&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;Trigger Hugo Update on Push to Main
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#d08770;&quot;&gt;on&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;push&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;branches&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;      - &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;main
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;paths&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;      - &amp;#39;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;the_archives&#x2F;**&lt;&#x2F;span&gt;&lt;span&gt;&amp;#39;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;jobs&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;  &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;trigger_hugo_update&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;runs-on&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;ubuntu-latest
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;    &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;steps&lt;&#x2F;span&gt;&lt;span&gt;:
&lt;&#x2F;span&gt;&lt;span&gt;      - &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;name&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;Commit empty message to target repo
&lt;&#x2F;span&gt;&lt;span&gt;        &lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;run&lt;&#x2F;span&gt;&lt;span&gt;: &lt;&#x2F;span&gt;&lt;span style=&quot;color:#b48ead;&quot;&gt;|
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;          git config --global user.email &amp;quot;github-actions@example.com&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;          git config --global user.name &amp;quot;GitHub Actions&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;          git clone &amp;quot;https:&#x2F;&#x2F;${{ secrets.PAT }}@github.com&#x2F;kcirtapfromspace&#x2F;kcirtap.io.git&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;          cd kcirtap.io
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;          git checkout main
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;          git commit --allow-empty -m &amp;quot;docs: Obsidian Dispatch ref: ${{ github.ref }}, sha: ${{ github.sha }}&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;          git push
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;# - name: Repository Dispatch
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;#   uses: peter-evans&#x2F;repository-dispatch@v2
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;#   with:
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;#     token: ${{ secrets.PAT }}
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;#     repository: kcirtapfromspace&#x2F;kcirtap.io
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;#     event-type: obsidian_push
&lt;&#x2F;span&gt;&lt;span&gt;      &lt;&#x2F;span&gt;&lt;span style=&quot;color:#65737e;&quot;&gt;#     client-payload: &amp;#39;{&amp;quot;ref&amp;quot;: &amp;quot;${{ github.ref }}&amp;quot;, &amp;quot;sha&amp;quot;: &amp;quot;${{ github.sha }}&amp;quot;}&amp;#39;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;I ended up using a custom git action to send an empty commit to the website repo, as it provides a more informative message within GitHub Actions. It also played a little nicer with the incremental versioning action release-please.&lt;&#x2F;p&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;trigger_screenshot.png” caption=“Repository Dispatch vs Empty Commit” alt=“Repository Dispatch vs Empty Commit” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;p&gt;Here is the sequence of what I ended up putting together.&lt;&#x2F;p&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;sequence_diagram.png” caption=“Sequence Diagram” alt=“Sequence Diagram” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;h4 id=&quot;immutable-production-clone&quot;&gt;Immutable production clone&lt;&#x2F;h4&gt;
&lt;p&gt;I have a folder in my Obsidian tree that I use to build the site content. New content applied to that directory triggers builds for the blog site. Immutable deploys are handled by the release-please action, which creates a release PR. For these PRs, production steps fetch the last empty commit carrying the correct content commit SHA. That commit is then used to sparse clone only the content directory into the build action. The content is injected into the Hugo build step to generate the static site, and with the release PR the site is published and the CDN cache is invalidated.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;sh&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-sh &quot;&gt;&lt;code class=&quot;language-sh&quot; data-lang=&quot;sh&quot;&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span&gt; config&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --global&lt;&#x2F;span&gt;&lt;span&gt; user.email &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;github-actions@example.com&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span&gt; config&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --global&lt;&#x2F;span&gt;&lt;span&gt; user.name &amp;quot;&lt;&#x2F;span&gt;&lt;span style=&quot;color:#a3be8c;&quot;&gt;GitHub Actions&lt;&#x2F;span&gt;&lt;span&gt;&amp;quot;
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span&gt; clone&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --filter&lt;&#x2F;span&gt;&lt;span&gt;=blob:none&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --no-checkout --sparse&lt;&#x2F;span&gt;&lt;span&gt; git@github.com:kcirtapfromspace&#x2F;obsidian.git obsidian
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#96b5b4;&quot;&gt;cd&lt;&#x2F;span&gt;&lt;span&gt; obsidian
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span&gt; sparse-checkout init&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt; --cone
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span&gt; fetch origin d13113fdc7b31b1d46bedd197a906ae90553c2a8
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span&gt; checkout d13113fdc7b31b1d46bedd197a906ae90553c2a8
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;git&lt;&#x2F;span&gt;&lt;span&gt; sparse-checkout set the_archives
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;ls -lah
&lt;&#x2F;span&gt;&lt;span style=&quot;color:#bf616a;&quot;&gt;mv&lt;&#x2F;span&gt;&lt;span&gt; the_archives ..&#x2F;src&#x2F;content&#x2F;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
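&lt;p&gt;The publish step itself boils down to two AWS calls: sync the rendered site to S3, then purge the CDN. A minimal sketch, assuming the bucket name and distribution ID are passed in (in the real pipeline they come from the workflow environment):&lt;&#x2F;p&gt;

```shell
# Publish the rendered site and invalidate the CDN cache.
# Defines the helper only; call it from an authenticated shell.
publish_site() {
  bucket=${1:?usage: publish_site bucket distribution-id}
  distribution=${2:?usage: publish_site bucket distribution-id}
  # Hugo writes the rendered site into public/; mirror it to S3.
  aws s3 sync public/ "s3://${bucket}" --delete
  # Invalidate everything so CloudFront serves the fresh build.
  aws cloudfront create-invalidation --distribution-id "$distribution" --paths "/*"
}
```

Invalidating `/*` wholesale is blunt but simple; a narrower path list would avoid burning through the free invalidation quota on large sites.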
&lt;h3 id=&quot;initial-nits&quot;&gt;Initial nits&lt;&#x2F;h3&gt;
&lt;p&gt;I am a bit peeved that I need to use HTML to render the images correctly; native markdown seems to mangle the image. There is a quirk with shortcodes: using &lt;code&gt;{{}}&lt;&#x2F;code&gt; inside code fencing renders them as literal text rather than images. The Hugo theme I picked doesn’t support SVG rendering, so that’s another rabbit hole: navigating partial layout overrides.&lt;&#x2F;p&gt;
&lt;pre data-lang=&quot;html&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-html &quot;&gt;&lt;code class=&quot;language-html&quot; data-lang=&quot;html&quot;&gt;&lt;span&gt;&amp;lt; image src=&amp;quot;static&#x2F;the_easy_way.png&amp;quot; caption=&amp;quot;The Easy Way&amp;quot; alt=&amp;quot;Architecture Layout of Dev Blog using Obsidian Hugo and AWS&amp;quot;width=&amp;quot;100%&amp;quot; &amp;gt;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;&amp;lt; image src=&amp;quot;static&#x2F;the_irritating_way.png&amp;quot; caption=&amp;quot;The Irritating Way&amp;quot; alt=&amp;quot;Architecture Layout of Dev Blog using Obsidian Hugo and AWS&amp;quot;width=&amp;quot;100%&amp;quot; &amp;gt;
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h2 id=&quot;wip-the-little-bits&quot;&gt;WIP: the little bits&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;Obsidian note taking
&lt;ul&gt;
&lt;li&gt;Live git commit &amp;amp; sync (across devices via GitHub)&lt;&#x2F;li&gt;
&lt;li&gt;Write markdown notes, blogs, musings&lt;&#x2F;li&gt;
&lt;li&gt;Template generator&lt;&#x2F;li&gt;
&lt;li&gt;Excalidraw&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;Static site generator (needs markdown support)&lt;&#x2F;li&gt;
&lt;li&gt;Deploy wireframe site&lt;&#x2F;li&gt;
&lt;li&gt;Terraform
&lt;ul&gt;
&lt;li&gt;State: S3 bucket, DynamoDB (make sure your table names in Terraform match up)&lt;&#x2F;li&gt;
&lt;li&gt;Infra: Route53, static site S3 bucket, CloudFront CDN (origin story, fighting with ciphers and other whatnots, security headers, X-site scripting, Edge Lambda to rewrite paths, WAF)&lt;&#x2F;li&gt;
&lt;li&gt;Certificate: needs SAN name to pass&lt;&#x2F;li&gt;
&lt;li&gt;KMS: encrypt the bucket and DynamoDB table&lt;&#x2F;li&gt;
&lt;li&gt;IAM: GitHub federated IdP, well-known fingerprint, GitHub Actions assume role&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;li&gt;CI&#x2F;CD
&lt;ul&gt;
&lt;li&gt;GitHub Actions: Terraform deploy&lt;&#x2F;li&gt;
&lt;li&gt;Sparse clone for Hugo deploy (uses deploy key to fetch private repo)&lt;&#x2F;li&gt;
&lt;li&gt;Obsidian event push action (uses PAT token)&lt;&#x2F;li&gt;
&lt;li&gt;CloudFront Al&lt;&#x2F;li&gt;
&lt;li&gt;Release triggers are hard and finicky&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
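&lt;p&gt;One concrete gotcha from the Terraform state items above: the state bucket and DynamoDB lock table have to exist, with names exactly matching the backend config, before the first &lt;code&gt;terraform init&lt;&#x2F;code&gt;. A bootstrap sketch with placeholder names:&lt;&#x2F;p&gt;

```shell
# Bootstrap the Terraform remote state backend: a versioned S3 bucket for
# state plus a DynamoDB lock table. Both names are placeholders and must
# match the backend block's bucket and dynamodb_table settings exactly.
STATE_BUCKET=kcirtap-blog-tf-state
LOCK_TABLE=kcirtap-blog-tf-locks

bootstrap_state() {
  aws s3api create-bucket --bucket "$STATE_BUCKET"
  aws s3api put-bucket-versioning --bucket "$STATE_BUCKET" \
    --versioning-configuration Status=Enabled
  # Terraform's S3 backend expects a string hash key named exactly LockID.
  aws dynamodb create-table --table-name "$LOCK_TABLE" \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST
}
```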
</description>
      </item>
      <item>
          <title>Tiers of compliance</title>
          <pubDate>Mon, 01 May 2023 16:22:46 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/compliance/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/compliance/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/compliance/">&lt;h1 id=&quot;regulatory-compliance-in-the-cloud&quot;&gt;Regulatory Compliance in the Cloud&lt;&#x2F;h1&gt;
&lt;p&gt;Compliance is a critical aspect of any organization’s operations, particularly when it comes to data security and privacy. In the realm of cloud computing, compliance requirements can be complex and varied, depending on the industry and regulatory environment. This note provides an overview of the different tiers of compliance, including FISMA, FedRAMP, and HIPAA, and outlines the specific control areas that organizations must address to achieve compliance. By understanding these requirements, organizations can ensure that their cloud infrastructure meets the necessary standards for data security and privacy.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;fisma-compliance&quot;&gt;FISMA Compliance&lt;&#x2F;h2&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Control Area&lt;&#x2F;th&gt;&lt;th&gt;FISMA Low&lt;&#x2F;th&gt;&lt;th&gt;FISMA Moderate&lt;&#x2F;th&gt;&lt;th&gt;FISMA High&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Identity &amp;amp; Access Management&lt;&#x2F;td&gt;&lt;td&gt;- Basic IAM policies&lt;br&gt;- MFA for privileged users&lt;br&gt;- Regular access reviews.&lt;&#x2F;td&gt;&lt;td&gt;- More stringent IAM policies&lt;br&gt;- MFA for all users&lt;br&gt;- Role-based access control&lt;br&gt;- Regular access audits&lt;br&gt;- Temporary credentials for short-term access&lt;&#x2F;td&gt;&lt;td&gt;- Enhanced user monitoring&lt;br&gt;- Continuous access auditing&lt;br&gt;- Additional access restrictions&lt;br&gt;- Session duration limitations&lt;br&gt;- Just-In-Time (JIT) access&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Data Encryption&lt;&#x2F;td&gt;&lt;td&gt;- Encryption at rest (KMS, S3, EBS)&lt;br&gt;- Encryption in transit (TLS)&lt;&#x2F;td&gt;&lt;td&gt;- More stringent encryption key management&lt;br&gt;- Automated key rotation&lt;br&gt;- Dedicated KMS Customer Master Keys (CMKs)&lt;&#x2F;td&gt;&lt;td&gt;- Enhanced encryption algorithms&lt;br&gt;- Hardware security modules (HSM) integration&lt;br&gt;- Stronger key management policies&lt;br&gt;- Key access and usage logging&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Network Security&lt;&#x2F;td&gt;&lt;td&gt;- Basic VPC setup&lt;br&gt;- Security groups&lt;br&gt;- Network ACLs&lt;&#x2F;td&gt;&lt;td&gt;- Enhanced VPC isolation&lt;br&gt;- WAF&lt;br&gt;- Network traffic monitoring&lt;br&gt;- Intrusion detection&lt;br&gt;- VPN or Direct Connect for hybrid environments&lt;&#x2F;td&gt;&lt;td&gt;- Advanced network protection&lt;br&gt;- Anomaly detection&lt;br&gt;- More comprehensive traffic analysis&lt;br&gt;- Micro-segmentation of network resources&lt;br&gt;- PrivateLink for service access&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Logging &amp;amp; Monitoring&lt;&#x2F;td&gt;&lt;td&gt;- Basic CloudTrail and CloudWatch setup&lt;br&gt;- Log storage and retention&lt;&#x2F;td&gt;&lt;td&gt;- Extended logging (S3, Lambda, RDS, etc.)&lt;br&gt;- More granular monitoring&lt;br&gt;- Regular audits&lt;br&gt;- AWS Config for resource tracking&lt;&#x2F;td&gt;&lt;td&gt;- Real-time continuous monitoring&lt;br&gt;- Advanced analytics (Amazon Elasticsearch, Kinesis)&lt;br&gt;- Automated response capabilities (Lambda, Step Functions)&lt;br&gt;- Centralized logging across accounts and regions&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Patch Management &amp;amp; Vulnerability Scanning&lt;&#x2F;td&gt;&lt;td&gt;- Regular patching and updates&lt;br&gt;- Basic vulnerability scanning&lt;&#x2F;td&gt;&lt;td&gt;- Rigorous patch management&lt;br&gt;- Regular vulnerability scanning (Amazon Inspector)&lt;br&gt;- Remediation processes and tracking&lt;&#x2F;td&gt;&lt;td&gt;- Continuous vulnerability scanning&lt;br&gt;- More aggressive patch management processes&lt;br&gt;- Integration with security information and event management (SIEM) tools&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Backup &amp;amp; Disaster Recovery&lt;&#x2F;td&gt;&lt;td&gt;- Basic backup (S3, EBS, RDS)&lt;br&gt;- Recovery processes&lt;&#x2F;td&gt;&lt;td&gt;- More frequent backups&lt;br&gt;- Enhanced recovery processes&lt;br&gt;- Regular testing&lt;br&gt;- Cross-region replication for backups&lt;&#x2F;td&gt;&lt;td&gt;- High availability and redundancy&lt;br&gt;- Multi-region deployment&lt;br&gt;- Shorter recovery time objectives (RTO) and recovery point objectives (RPO)&lt;br&gt;- Versioning and backup validation&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Incident Response&lt;&#x2F;td&gt;&lt;td&gt;- Basic incident response plan&lt;br&gt;- Notification and escalation processes&lt;&#x2F;td&gt;&lt;td&gt;- Enhanced incident response plan&lt;br&gt;- Regular testing and updates&lt;br&gt;- Incident response team and training&lt;&#x2F;td&gt;&lt;td&gt;- Advanced incident response capabilities&lt;br&gt;- Automated response (Lambda, Step Functions)&lt;br&gt;- Continuous improvement based on lessons learned&lt;br&gt;- Integration with external threat intelligence sources&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Compliance Validation&lt;&#x2F;td&gt;&lt;td&gt;- Regular audits and assessments&lt;br&gt;- Compliance with FISMA Low requirements&lt;&#x2F;td&gt;&lt;td&gt;- More comprehensive audits and assessments&lt;br&gt;- Compliance with FISMA Moderate requirements&lt;br&gt;- AWS Artifact for compliance documentation&lt;&#x2F;td&gt;&lt;td&gt;- Rigorous audits and assessments&lt;br&gt;- Continuous validation&lt;br&gt;- Compliance with FISMA High requirements&lt;br&gt;- Third-party assessments and certifications&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
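&lt;p&gt;As a concrete example of the automated key rotation controls in the table above, here is a sketch of auditing rotation status across KMS keys. It assumes the caller has &lt;code&gt;kms:ListKeys&lt;&#x2F;code&gt; and &lt;code&gt;kms:GetKeyRotationStatus&lt;&#x2F;code&gt; permissions:&lt;&#x2F;p&gt;

```shell
# Print annual rotation status for every KMS key in the account, one line
# per key. Defines the helper only; run audit_key_rotation in an
# authenticated shell.
audit_key_rotation() {
  for key in $(aws kms list-keys --query 'Keys[].KeyId' --output text); do
    status=$(aws kms get-key-rotation-status --key-id "$key" \
      --query KeyRotationEnabled --output text)
    printf '%s rotation=%s\n' "$key" "$status"
  done
}
```

Any key reporting `rotation=False` is a finding at the Moderate tier and above; AWS Config rules can automate the same check continuously.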
&lt;h2 id=&quot;fedramp-compliance&quot;&gt;FedRAMP Compliance&lt;&#x2F;h2&gt;
&lt;table&gt;&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Control Area&lt;&#x2F;th&gt;&lt;th&gt;FedRAMP Low&lt;&#x2F;th&gt;&lt;th&gt;FedRAMP Moderate&lt;&#x2F;th&gt;&lt;th&gt;FedRAMP High&lt;&#x2F;th&gt;&lt;&#x2F;tr&gt;&lt;&#x2F;thead&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Identity &amp;amp; Access Management&lt;&#x2F;td&gt;&lt;td&gt;- Basic IAM policies&lt;br&gt;- MFA for privileged users&lt;br&gt;- Regular access reviews.&lt;&#x2F;td&gt;&lt;td&gt;- More stringent IAM policies&lt;br&gt;- MFA for all users&lt;br&gt;- Role-based access control&lt;br&gt;- Regular access audits&lt;br&gt;- Temporary credentials for short-term access&lt;&#x2F;td&gt;&lt;td&gt;- Enhanced user monitoring&lt;br&gt;- Continuous access auditing&lt;br&gt;- Additional access restrictions&lt;br&gt;- Session duration limitations&lt;br&gt;- Just-In-Time (JIT) access&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Data Encryption&lt;&#x2F;td&gt;&lt;td&gt;- Encryption at rest (KMS, S3, EBS)&lt;br&gt;- Encryption in transit (TLS)&lt;&#x2F;td&gt;&lt;td&gt;- More stringent encryption key management&lt;br&gt;- Automated key rotation&lt;br&gt;- Dedicated KMS Customer Master Keys (CMKs)&lt;&#x2F;td&gt;&lt;td&gt;- Enhanced encryption algorithms&lt;br&gt;- Hardware security modules (HSM) integration&lt;br&gt;- Stronger key management policies&lt;br&gt;- Key access and usage logging&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Network Security&lt;&#x2F;td&gt;&lt;td&gt;- Basic VPC setup&lt;br&gt;- Security groups&lt;br&gt;- Network ACLs&lt;&#x2F;td&gt;&lt;td&gt;- Enhanced VPC isolation&lt;br&gt;- WAF&lt;br&gt;- Network traffic monitoring&lt;br&gt;- Intrusion detection&lt;br&gt;- VPN or Direct Connect for hybrid environments&lt;&#x2F;td&gt;&lt;td&gt;- Advanced network protection&lt;br&gt;- Anomaly detection&lt;br&gt;- More comprehensive traffic analysis&lt;br&gt;- Micro-segmentation of network resources&lt;br&gt;- PrivateLink for service access&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Logging &amp;amp; Monitoring&lt;&#x2F;td&gt;&lt;td&gt;- Basic CloudTrail and CloudWatch setup&lt;br&gt;- Log storage and retention&lt;&#x2F;td&gt;&lt;td&gt;- Extended logging (S3, Lambda, RDS, etc.)&lt;br&gt;- More granular monitoring&lt;br&gt;- Regular audits&lt;br&gt;- AWS Config for resource tracking&lt;&#x2F;td&gt;&lt;td&gt;- Real-time continuous monitoring&lt;br&gt;- Advanced analytics (Amazon Elasticsearch, Kinesis)&lt;br&gt;- Automated response capabilities (Lambda, Step Functions)&lt;br&gt;- Centralized logging across accounts and regions&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Patch Management &amp;amp; Vulnerability Scanning&lt;&#x2F;td&gt;&lt;td&gt;- Regular patching and updates&lt;br&gt;- Basic vulnerability scanning&lt;&#x2F;td&gt;&lt;td&gt;- Rigorous patch management&lt;br&gt;- Regular vulnerability scanning (Amazon Inspector)&lt;br&gt;- Remediation processes and tracking&lt;&#x2F;td&gt;&lt;td&gt;- Continuous vulnerability scanning&lt;br&gt;- More aggressive patch management processes&lt;br&gt;- Integration with security information and event management (SIEM) tools&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Backup &amp;amp; Disaster Recovery&lt;&#x2F;td&gt;&lt;td&gt;- Basic backup (S3, EBS, RDS)&lt;br&gt;- Recovery processes&lt;&#x2F;td&gt;&lt;td&gt;- More frequent backups&lt;br&gt;- Enhanced recovery processes&lt;br&gt;- Regular testing&lt;br&gt;- Cross-region replication for backups&lt;&#x2F;td&gt;&lt;td&gt;- High availability and redundancy&lt;br&gt;- Multi-region deployment&lt;br&gt;- Shorter recovery time objectives (RTO) and recovery point objectives (RPO)&lt;br&gt;- Versioning and backup validation&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Incident Response&lt;&#x2F;td&gt;&lt;td&gt;- Basic incident response plan&lt;br&gt;- Notification and escalation processes&lt;&#x2F;td&gt;&lt;td&gt;- Enhanced incident response plan&lt;br&gt;- Regular testing and updates&lt;br&gt;- Incident response team and training&lt;&#x2F;td&gt;&lt;td&gt;- Advanced incident response capabilities&lt;br&gt;- Automated response (Lambda, Step Functions)&lt;br&gt;- Continuous improvement based on lessons learned&lt;br&gt;- Integration with external threat intelligence sources&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;tr&gt;&lt;td&gt;Compliance Validation&lt;&#x2F;td&gt;&lt;td&gt;- Regular audits and assessments&lt;br&gt;- Compliance with FedRAMP Low requirements&lt;&#x2F;td&gt;&lt;td&gt;- More comprehensive audits and assessments&lt;br&gt;- Compliance with FedRAMP Moderate requirements&lt;br&gt;- AWS Artifact for compliance documentation&lt;&#x2F;td&gt;&lt;td&gt;- Rigorous audits and assessments&lt;br&gt;- Continuous validation&lt;br&gt;- Compliance with FedRAMP High requirements&lt;br&gt;- Third-party assessments and certifications&lt;&#x2F;td&gt;&lt;&#x2F;tr&gt;
&lt;&#x2F;tbody&gt;&lt;&#x2F;table&gt;
&lt;h2 id=&quot;hipaa-compliance&quot;&gt;HIPAA Compliance&lt;&#x2F;h2&gt;
</description>
      </item>
      <item>
          <title>When to NOT use Kubernetes</title>
          <pubDate>Fri, 17 Mar 2023 16:22:46 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/when-not-to-use-k8s/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/when-not-to-use-k8s/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/when-not-to-use-k8s/">&lt;h1 id=&quot;when-to-not-use-kubernetes&quot;&gt;When to NOT use Kubernetes&lt;&#x2F;h1&gt;
&lt;p&gt;Have you ever wondered whether Kubernetes is worth the complexity and overhead for a small or medium-scale application? In many cases the benefits do not outweigh the costs (financial or emotional) of maintaining a Kubernetes platform, particularly when the applications require few resources and see low traffic. So when does Kubernetes make sense for your application? Let’s look at some scenarios where Kubernetes may not be the best choice, starting with your application’s architecture: Kubernetes is a natural fit for microservices, event-driven, and pipeline architectures, but may not be ideal for others such as n-tier, microkernel, service-oriented, or service-based architectures.&lt;&#x2F;p&gt;
&lt;h2 id=&quot;what-is-kubernetes&quot;&gt;What is Kubernetes?&lt;&#x2F;h2&gt;
&lt;!-- insert link to diagram in ..&#x2F;diagrams&#x2F;kubernetes.excalidraw.png --&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;diagrams&#x2F;when-not-to-use-k8s&#x2F;kubernetes.excalidraw.png&quot; width=&quot;100%&quot; height=&quot;100%&quot;&gt; --&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;kubernetes.excalidraw.png” caption=“Kubernetes Architecture” alt=“Shows common baseline with k8s.” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;!-- [![Kubernetes Architecture](static&#x2F;kubernetes.excalidraw.png) --&gt;
&lt;p&gt;Getting back to basics: what is Kubernetes? Kubernetes is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications. It is particularly useful for large-scale applications with high traffic volume and event-driven, microservice, or pipeline architectures, where it provides powerful management, scaling, and deployment capabilities. Examples include e-commerce platforms, social media networks, and big data processing; in these scenarios, the benefits of Kubernetes often outweigh the complexity and overhead it introduces. Yet although Kubernetes has proven to be a valuable tool for many developers and organizations, it is not the best solution for every scenario. In certain situations it is better to opt for a different container orchestration platform, or not to use one at all.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;containers&quot;&gt;Containers&lt;&#x2F;h3&gt;
&lt;p&gt;Containers offer a lightweight alternative to virtual machines, allowing you to package and run your applications in a standardized environment. Containers are portable, running on any infrastructure that supports them, including on-premises, cloud, or hybrid environments. Being more lightweight and efficient than virtual machines, containers enable running more applications on the same hardware. They are particularly well-suited for applications requiring consistent environments across development, testing, and production stages.&lt;&#x2F;p&gt;
&lt;p&gt;One major design consequence of moving from serverless to container orchestration platforms such as Amazon ECS, AWS Fargate, and Kubernetes is that your code’s runtime becomes heavily dependent on encapsulation within containers. While Docker was once the de facto standard for containerization, alternatives like containerd, Podman, Buildah, and CRI-O have gained popularity, partly in response to Docker’s disruptive switch to an Oracle-like licensing model. These alternatives give organizations more choices when adopting containerization in their software development and deployment processes.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;kubernetes-features&quot;&gt;Kubernetes Features&lt;&#x2F;h3&gt;
&lt;ul&gt;
&lt;li&gt;Automated rollouts and rollbacks&lt;&#x2F;li&gt;
&lt;li&gt;Service discovery and load balancing&lt;&#x2F;li&gt;
&lt;li&gt;Storage orchestration&lt;&#x2F;li&gt;
&lt;li&gt;Self-healing&lt;&#x2F;li&gt;
&lt;li&gt;Secret and configuration management&lt;&#x2F;li&gt;
&lt;li&gt;Automatic bin packing&lt;&#x2F;li&gt;
&lt;li&gt;Batch execution&lt;&#x2F;li&gt;
&lt;li&gt;Horizontal scaling&lt;&#x2F;li&gt;
&lt;li&gt;IPv4&#x2F;IPv6 dual-stack&lt;&#x2F;li&gt;
&lt;li&gt;Designed for extensibility&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;p&gt;Now that we vaguely understand what a container is, here are some of the features Kubernetes promotes. Think about how these features could be useful for your operation, and that may shed some light on why it’s gaining so much popularity.&lt;&#x2F;p&gt;
&lt;p&gt;Paired with a thriving community, an ever-expanding suite of third-party integrations, and the ease with which anyone can get started (minikube, kind, k3s), this really helps drive the adoption of K8s.&lt;&#x2F;p&gt;
&lt;p&gt;But it also happens that Kubernetes is particularly useful for large-scale applications with high traffic volume that run streaming or batch operations and leverage microservices, event-driven, or pipeline architectures, where it provides a powerful and consistent foundation for management, scaling, and deployment.&lt;&#x2F;p&gt;
&lt;p&gt;Examples of such applications that leverage Kubernetes include e-commerce platforms, streaming networks, social media networks, and big data processing. In these scenarios, the benefits of Kubernetes often outweigh the complexity and overhead it introduces.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;common-architectures-with-k8s&quot;&gt;Common Architectures with K8s&lt;&#x2F;h3&gt;
&lt;h4 id=&quot;microservices&quot;&gt;Microservices&lt;&#x2F;h4&gt;
&lt;p&gt;Microservices architecture involves breaking down an application into a collection of loosely coupled services. Each service is a self-contained unit that can be deployed and scaled independently. This architecture is particularly suitable for Kubernetes, as it can manage the lifecycle, scaling, and deployment of these services efficiently, making the application more scalable, resilient, and easy to maintain.&lt;&#x2F;p&gt;
&lt;p&gt;In this e-commerce application example, multiple services are deployed within a Kubernetes cluster. The API Gateway routes requests to the corresponding service (Authentication, Product, Cart, or Order). Each service communicates with its respective database.&lt;&#x2F;p&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;diagrams&#x2F;when-not-to-use-k8s&#x2F;microservices.excalidraw.png&quot; width=&quot;100%&quot; height=&quot;100%&quot;&gt; --&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;microservices.excalidraw.png” caption=“Microservices Architecture” alt=“Shows common Microservices Architecture.” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;!-- [![Microservices Architecture](static&#x2F;microservices.excalidraw.png)--&gt;
&lt;pre data-lang=&quot;mermaid&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-mermaid &quot;&gt;&lt;code class=&quot;language-mermaid&quot; data-lang=&quot;mermaid&quot;&gt;&lt;span&gt;graph TD
&lt;&#x2F;span&gt;&lt;span&gt;    A[API Gateway] --&amp;gt; B[Authentication Service]
&lt;&#x2F;span&gt;&lt;span&gt;    A --&amp;gt; C[Product Service]
&lt;&#x2F;span&gt;&lt;span&gt;    A --&amp;gt; D[Cart Service]
&lt;&#x2F;span&gt;&lt;span&gt;    A --&amp;gt; E[Order Service]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; F[User Database]
&lt;&#x2F;span&gt;&lt;span&gt;    C --&amp;gt; G[Product Database]
&lt;&#x2F;span&gt;&lt;span&gt;    D --&amp;gt; H[Cart Database]
&lt;&#x2F;span&gt;&lt;span&gt;    E --&amp;gt; I[Order Database]
&lt;&#x2F;span&gt;&lt;span&gt;    subgraph Kubernetes Cluster
&lt;&#x2F;span&gt;&lt;span&gt;    B
&lt;&#x2F;span&gt;&lt;span&gt;    C
&lt;&#x2F;span&gt;&lt;span&gt;    D
&lt;&#x2F;span&gt;&lt;span&gt;    E
&lt;&#x2F;span&gt;&lt;span&gt;    F
&lt;&#x2F;span&gt;&lt;span&gt;    G
&lt;&#x2F;span&gt;&lt;span&gt;    H
&lt;&#x2F;span&gt;&lt;span&gt;    I
&lt;&#x2F;span&gt;&lt;span&gt;    end
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h4 id=&quot;event-driven&quot;&gt;Event-Driven&lt;&#x2F;h4&gt;
&lt;p&gt;Event-driven architecture focuses on applications that are triggered by events that occur in the system. These applications react to events from various sources, such as user interactions or changes in data. Kubernetes is a good fit for event-driven applications when they are designed with microservices or when the application components need to be scaled independently. Kubernetes can manage the lifecycle of the application and its services, as well as scale the application to meet demand.&lt;&#x2F;p&gt;
&lt;p&gt;In this IoT application example, the IoT Device and User Application send events to the Event Bus, which routes the events to the appropriate services (Notification Service and Data Processing Service) within the Kubernetes cluster. These services handle notifications and data processing&#x2F;storage.&lt;&#x2F;p&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;diagrams&#x2F;when-not-to-use-k8s&#x2F;event-driven.excalidraw.png&quot; width=&quot;100%&quot; height=&quot;100%&quot;&gt; --&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;event-driven.excalidraw.png” caption=“Event-Driven Architecture” alt=“Shows common Event-Driven Architecture.” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;!-- [![Event-Driven Architecture](static&#x2F;event-driven.excalidraw.png) --&gt;
&lt;details&gt;
&lt;summary&gt;Mermaid&lt;&#x2F;summary&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;```mermaid
&lt;&#x2F;span&gt;&lt;span&gt;%%{init: {&amp;#39;theme&amp;#39;: &amp;#39;pastel&amp;#39;, &amp;quot;flowchart&amp;quot; : { &amp;quot;curve&amp;quot; : &amp;quot;basis&amp;quot; } } }%%
&lt;&#x2F;span&gt;&lt;span&gt;graph TD
&lt;&#x2F;span&gt;&lt;span&gt;    A[IoT Device] --&amp;gt; B[Event Bus]
&lt;&#x2F;span&gt;&lt;span&gt;    C[User Application] --&amp;gt; B
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; D[Notification Service]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; E[Data Processing Service]
&lt;&#x2F;span&gt;&lt;span&gt;    D --&amp;gt; F[User Database]
&lt;&#x2F;span&gt;&lt;span&gt;    E --&amp;gt; G[Data Storage]
&lt;&#x2F;span&gt;&lt;span&gt;    subgraph Kubernetes Cluster
&lt;&#x2F;span&gt;&lt;span&gt;    D
&lt;&#x2F;span&gt;&lt;span&gt;    E
&lt;&#x2F;span&gt;&lt;span&gt;    F
&lt;&#x2F;span&gt;&lt;span&gt;    G
&lt;&#x2F;span&gt;&lt;span&gt;    end
&lt;&#x2F;span&gt;&lt;span&gt;```
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;&#x2F;details&gt;
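&lt;p&gt;To make the fan-out concrete, here is a toy sketch of the dispatch at the heart of the diagram above. All names here (&lt;code&gt;EventBus&lt;&#x2F;code&gt;, the topic string, the handlers) are hypothetical, not code from any real system; it only illustrates how an event bus routes one event to several independently deployable services.&lt;&#x2F;p&gt;

```python
# Minimal event-bus dispatch sketch (hypothetical names).
# Handlers subscribe to topics; publish fans each event out to every
# subscriber, mirroring how the Event Bus routes events to the
# Notification and Data Processing services in the diagram.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for the topic.
        return [handler(event) for handler in self.subscribers[topic]]

bus = EventBus()
bus.subscribe("sensor.reading", lambda e: f"notify: {e['device']}")
bus.subscribe("sensor.reading", lambda e: f"store: {e['value']}")
results = bus.publish("sensor.reading", {"device": "thermostat", "value": 21.5})
print(results)
```

&lt;p&gt;In a Kubernetes deployment each handler would be its own service behind the bus, scaled independently; the in-process list here stands in for that routing.&lt;&#x2F;p&gt;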
&lt;h4 id=&quot;pipeline&quot;&gt;Pipeline&lt;&#x2F;h4&gt;
&lt;p&gt;Pipeline architecture involves breaking down an application into a series of stages, where each stage processes the data and passes it to the next one. This architecture is suitable for data-intensive applications or workflows. Kubernetes can be a good fit for managing pipeline applications when individual stages can be deployed as independent services, allowing for efficient management, scaling, and deployment.&lt;&#x2F;p&gt;
&lt;p&gt;In this data processing pipeline example, data is ingested, transformed, enriched, and stored within a Kubernetes cluster. Each stage in the pipeline is a separate service managed by Kubernetes.&lt;&#x2F;p&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;diagrams&#x2F;when-not-to-use-k8s&#x2F;kubernetes.excalidraw.png&quot; width=&quot;100%&quot; height=&quot;100%&quot;&gt; --&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;pipeline.excalidraw.png” caption=“Pipeline Architecture” alt=“Shows common Pipeline Architecture.” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;!-- [![Pipeline Architecture](the_archives&#x2F;archives&#x2F;when-not-to-use-k8s&#x2F;static&#x2F;pipeline.excalidraw.png) --&gt;
&lt;pre data-lang=&quot;mermaid&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-mermaid &quot;&gt;&lt;code class=&quot;language-mermaid&quot; data-lang=&quot;mermaid&quot;&gt;&lt;span&gt;%%{init: {&amp;#39;theme&amp;#39;: &amp;#39;pastel&amp;#39;, &amp;quot;flowchart&amp;quot; : { &amp;quot;curve&amp;quot; : &amp;quot;basis&amp;quot; } } }%%
&lt;&#x2F;span&gt;&lt;span&gt;graph LR
&lt;&#x2F;span&gt;&lt;span&gt;    A[Data Source] --&amp;gt; B[Data Ingestion]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; C[Data Transformation]
&lt;&#x2F;span&gt;&lt;span&gt;    C --&amp;gt; D[Data Enrichment]
&lt;&#x2F;span&gt;&lt;span&gt;    D --&amp;gt; E[Data Storage]
&lt;&#x2F;span&gt;&lt;span&gt;    subgraph Kubernetes Cluster
&lt;&#x2F;span&gt;&lt;span&gt;    B
&lt;&#x2F;span&gt;&lt;span&gt;    C
&lt;&#x2F;span&gt;&lt;span&gt;    D
&lt;&#x2F;span&gt;&lt;span&gt;    E
&lt;&#x2F;span&gt;&lt;span&gt;    end
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
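&lt;p&gt;The staged flow in the diagram can be sketched as plain functions composed in order. This is an illustrative toy, not production code: each function stands in for what would be a separate containerized service in the Kubernetes cluster, and the stage names are the ones from the diagram.&lt;&#x2F;p&gt;

```python
# Each pipeline stage is an independent function; in Kubernetes each
# would be its own service, chained exactly as the diagram shows.
def ingest(raw):
    # Data Ingestion: normalize incoming records.
    return [line.strip() for line in raw]

def transform(records):
    # Data Transformation: reshape each record.
    return [r.upper() for r in records]

def enrich(records):
    # Data Enrichment: attach derived metadata.
    return [{"value": r, "length": len(r)} for r in records]

def store(records, sink):
    # Data Storage: persist the final records.
    sink.extend(records)
    return sink

sink = []
data = ["  alpha ", "beta"]
for stage in [ingest, transform, enrich]:
    data = stage(data)
store(data, sink)
print(sink)
```

&lt;p&gt;Because each stage only depends on the previous stage’s output, any one of them can be redeployed or scaled without touching the others, which is exactly what makes this architecture a good match for Kubernetes.&lt;&#x2F;p&gt;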
&lt;h2 id=&quot;scenarios-where-kubernetes-may-not-be-the-best-choice&quot;&gt;Scenarios Where Kubernetes May Not Be the Best Choice&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;legacy-applications&quot;&gt;Legacy Applications&lt;&#x2F;h3&gt;
&lt;p&gt;First up among the situations where Kubernetes may not be the best fit: legacy applications. Integrating older applications with new technology like Kubernetes can be challenging and may require significant effort and investment. Legacy applications are typically monolithic, with complex dependencies and architectures that make them hard to containerize immediately and integrate with modern platforms like ECS, Kubernetes, or serverless.&lt;&#x2F;p&gt;
&lt;p&gt;Retrofitting legacy applications for containerization can require significant effort and investment, which may not be cost-effective. Even once containerized, their monolithic nature and complex dependencies can make integration with Kubernetes difficult. For instance, legacy applications may have baked-in assumptions about scaling mechanisms, logging, storage, or cross-instance communication that do not align well with Kubernetes’ principles. In such cases, it might be more practical to stick with existing deployment methods that better suit the needs of these applications.&lt;&#x2F;p&gt;
&lt;p&gt;If modernization is necessary, applying the Strangler Pattern (aka “Strangler Fig” or “Vine”) to a legacy application can be a lengthy ordeal: a facade is propped up and new elements are developed around or on top of the existing legacy system, allowing both to run in parallel until the legacy elements are eventually migrated or swapped out.&lt;&#x2F;p&gt;
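&lt;p&gt;The facade at the heart of the Strangler Pattern can be sketched in a few lines. Everything here is hypothetical (the path prefixes, the handler names): the point is only that the facade routes each request to either the legacy system or a new service, and the migrated set grows over time.&lt;&#x2F;p&gt;

```python
# Strangler-pattern facade sketch (hypothetical routes and handlers).
# Requests for migrated path prefixes go to the new service; everything
# else still hits the legacy system. Migration proceeds by growing
# MIGRATED_PREFIXES one route at a time.
MIGRATED_PREFIXES = ("/orders", "/cart")

def legacy_app(path):
    return f"legacy handled {path}"

def new_service(path):
    return f"new service handled {path}"

def facade(path):
    if path.startswith(MIGRATED_PREFIXES):
        return new_service(path)
    return legacy_app(path)

print(facade("/orders/42"))   # routed to the new service
print(facade("/reports/q1"))  # still served by the legacy system
```

&lt;p&gt;In practice the facade is usually an ingress, API gateway, or reverse proxy rather than application code, but the routing decision is the same.&lt;&#x2F;p&gt;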
&lt;h3 id=&quot;small-teams&quot;&gt;Small Teams&lt;&#x2F;h3&gt;
&lt;p&gt;Kubernetes can also be a poor fit for organizations with limited resources, as the infrastructure, expertise, and effort required to run and maintain a cluster can be substantial. The CNCF favors an open-source ecosystem that encourages piecemeal design: with Kubernetes there is a plethora of third-party integrations, services, and operators, all with varying opinions on how to achieve an outcome. I have often found myself taking detours to resolve hiccups in platform functionality I took for granted (looking at your breaking changes, containerd). A small team may not be able to keep pace with the whack-a-mole process of fixing breaking changes as they come up.&lt;&#x2F;p&gt;
&lt;p&gt;This ever-expanding selection may be less attractive for organizations that want to design for simplicity rather than flexibility. Managed Kubernetes services like Amazon EKS can reduce the expertise needed to set up and maintain a cluster, but organizations still need a good understanding of Kubernetes concepts and best practices to effectively manage their applications, networking, security, and storage in the cluster.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;steep-learning-curve&quot;&gt;Steep Learning Curve&lt;&#x2F;h3&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;..&#x2F;assets&#x2F;images&#x2F;posts&#x2F;when-not-to-use-k8s&#x2F;learning.png&quot; width=&quot;40%&quot; height=&quot;25%&quot; align=&quot;left&quot;&gt; --&gt;
&lt;p&gt;Moreover, the learning curve for Kubernetes can be steep, and it may not be the best choice for organizations with limited IT staff or developers. The platform requires significant expertise in areas such as networking, security, storage, and CI&#x2F;CD to be effectively deployed and managed. This expertise can be challenging to acquire and may not be feasible due to budgets, staffing, or available domain knowledge. Additionally, Kubernetes is a rapidly evolving platform that introduces breaking changes with each new release, requiring dedicated resources to keep up with the latest developments in the space.&lt;&#x2F;p&gt;
&lt;h3 id=&quot;regulatory-and-cost-constraints&quot;&gt;Regulatory and Cost Constraints&lt;&#x2F;h3&gt;
&lt;p&gt;Kubernetes may not be the ideal choice for organizations with stringent regulatory or cost constraints, such as financial institutions or healthcare organizations. The platform’s security features and network policies are inherently complex, necessitating substantial expertise for proper configuration and validation to meet compliance requirements. Organizations with strict regulations may find it difficult to grasp how services are secured, particularly when alternative services are available that rely on a cloud provider’s shared responsibility model to assume most of the responsibility for securing systems.&lt;&#x2F;p&gt;
&lt;p&gt;Moreover, the cost of operating a Kubernetes cluster can be considerable. For instance, consider an EKS cluster running a service continuously, with the container image updated monthly for security reasons. Using the AWS Cost Calculator, the EKS control plane costs $73 per month. In addition to this, clusters require nodes for computation. Running a single service for a month using Fargate would cost around $270.33, while running comparable nodes (m6g.xlarge) on EC2 instances provides more flexibility in cost control, with pricing options ranging from approximately $50 to $112 per month. These price scenarios result in annual costs ranging from ~$1,500 to around ~$5,000 per cluster, excluding expenses for VPC, DNS, databases, additional clusters, or logging and monitoring. For organizations with limited budgets or cost-consciousness, adopting a serverless architecture or running services on dedicated EC2 Spot Instances can be more cost-effective solutions, as they offer the ability to scale to zero when not in use.&lt;&#x2F;p&gt;
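&lt;p&gt;Recomputing the rough annual ranges from the figures quoted above (the $73&#x2F;month control plane, the ~$270.33&#x2F;month Fargate service, and the ~$50–$112&#x2F;month EC2 node range) makes the spread concrete. The arithmetic below uses only those quoted numbers; everything outside them (VPC, DNS, databases, logging) is still excluded, as in the text.&lt;&#x2F;p&gt;

```python
# Annual cluster cost range from the monthly figures quoted in the post.
eks_control_plane = 73.0           # USD per month, EKS control plane
fargate_service = 270.33           # USD per month, single service on Fargate
ec2_low, ec2_high = 50.0, 112.0    # USD per month, m6g.xlarge-equivalent range

annual_low = 12 * (eks_control_plane + ec2_low)          # cheapest EC2 nodes
annual_high = 12 * (eks_control_plane + fargate_service) # Fargate path
print(round(annual_low))   # the "~$1,500" floor
print(round(annual_high))  # toward the "~$5,000" ceiling
```

&lt;p&gt;The floor lands near $1,476&#x2F;year and the Fargate path near $4,120&#x2F;year before any supporting services, which is why the post frames the range as roughly $1,500 to $5,000 per cluster.&lt;&#x2F;p&gt;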
&lt;p&gt;Note also that Kubernetes is not secure by default, so deliberate security practices are needed to keep everything locked down. Paired with its large open-source ecosystem, there may be a need to hand-roll container images to create custom, secure, and optimized containers tailored to a specific security posture, including for services bundled with Kubernetes that you might not have thought of.&lt;&#x2F;p&gt;
&lt;p&gt;By considering alternatives like AWS Lambda, Docker Compose, and AWS Fargate, developers and organizations can choose the most appropriate solution for their unique circumstances. Real-life examples of applications that have successfully adopted these alternatives demonstrate the importance of selecting the right container orchestration platform based on individual requirements. Ultimately, making an informed decision can lead to more efficient development, deployment, and management of containerized applications, while also optimizing costs and resources.&lt;&#x2F;p&gt;
&lt;p&gt;&lt;a href=&quot;https:&#x2F;&#x2F;medium.com&#x2F;nerd-for-tech&#x2F;software-architecture-for-the-cloud-c9226150c1f3&quot;&gt;Software Architecture for the Cloud&lt;&#x2F;a&gt; by Dick Dowdell&lt;&#x2F;p&gt;
&lt;h2 id=&quot;architecture-types-that-are-not-well-suited-for-kubernetes&quot;&gt;Architecture Types that are not well suited for Kubernetes&lt;&#x2F;h2&gt;
&lt;h3 id=&quot;n-tier&quot;&gt;N-Tier&lt;&#x2F;h3&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;..&#x2F;assets&#x2F;images&#x2F;posts&#x2F;when-not-to-use-k8s&#x2F;n-tier.excalidraw.png&quot; width=&quot;15%&quot; height=&quot;10%&quot; align=&quot;right&quot;&gt; --&gt;
&lt;p&gt;N-tier architecture involves breaking down an application into a series of tiers, where each tier has a specific responsibility (e.g., presentation, business logic, data storage). While Kubernetes can be used to manage the lifecycle, scaling, and deployment of each tier, it is not inherently designed for n-tier architectures. However, it can still be a viable option for managing n-tier applications when the tiers are designed as independent services.&lt;&#x2F;p&gt;
&lt;p&gt;In this traditional n-tier web application example, the Web Server, Application Server, and Database Server are deployed on separate virtual machines (VMs) or physical servers. This monolithic architecture is less suitable for Kubernetes.&lt;&#x2F;p&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;diagrams&#x2F;when-not-to-use-k8s&#x2F;n-tier.excalidraw.png&quot; width=&quot;100%&quot; height=&quot;100%&quot;&gt; --&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;n-tier.excalidraw.png” caption=“N-Tier Architecture” alt=“Shows common N-Tier Architecture.” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;!-- [![N-Tier Architecture](the_archives&#x2F;archives&#x2F;when-not-to-use-k8s&#x2F;static&#x2F;n-tier.excalidraw.png) --&gt;
&lt;pre data-lang=&quot;mermaid&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-mermaid &quot;&gt;&lt;code class=&quot;language-mermaid&quot; data-lang=&quot;mermaid&quot;&gt;&lt;span&gt;%%{init: {&amp;#39;theme&amp;#39;: &amp;#39;pastel&amp;#39;, &amp;quot;flowchart&amp;quot; : { &amp;quot;curve&amp;quot; : &amp;quot;basis&amp;quot; } } }%%
&lt;&#x2F;span&gt;&lt;span&gt;graph TD;
&lt;&#x2F;span&gt;&lt;span&gt;    A[Web Browser] --&amp;gt; B[WAF]
&lt;&#x2F;span&gt;&lt;span&gt;    B[WAF] --&amp;gt; C[Web Server]
&lt;&#x2F;span&gt;&lt;span&gt;    C --&amp;gt; D[Application Server]
&lt;&#x2F;span&gt;&lt;span&gt;    F --&amp;gt; E[Caching Server]
&lt;&#x2F;span&gt;&lt;span&gt;    D --&amp;gt; E
&lt;&#x2F;span&gt;&lt;span&gt;    D --&amp;gt; F[Database Server]
&lt;&#x2F;span&gt;&lt;span&gt;    subgraph VMs or Physical Servers
&lt;&#x2F;span&gt;&lt;span&gt;        B
&lt;&#x2F;span&gt;&lt;span&gt;        C
&lt;&#x2F;span&gt;&lt;span&gt;        D
&lt;&#x2F;span&gt;&lt;span&gt;        E
&lt;&#x2F;span&gt;&lt;span&gt;        F
&lt;&#x2F;span&gt;&lt;span&gt;    end
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;microkernel&quot;&gt;Microkernel&lt;&#x2F;h3&gt;
&lt;p&gt;Microkernel architecture is over 50 years old; it involves building an application with a minimal core system and a set of plug-ins or modules that extend its functionality. This architecture is not inherently designed for Kubernetes, as the focus is on a minimal core rather than independent services. However, if the plug-ins or modules can be containerized and managed independently, Kubernetes could be used for managing their lifecycle, scaling, and deployment.&lt;&#x2F;p&gt;
&lt;p&gt;In this microkernel-based monitoring system example, the Core System communicates with multiple plugins (Reporting, Analytics, Monitoring). The tight coupling between the core and its plugins makes this architecture less suitable for Kubernetes. Web browsers like Firefox, Chrome, and Safari, and IDEs like Eclipse or VS Code, are examples of microkernel-based applications where capabilities are added through plug-ins or extensions.&lt;&#x2F;p&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;diagrams&#x2F;when-not-to-use-k8s&#x2F;microkernel.excalidraw.png&quot; width=&quot;100%&quot; height=&quot;100%&quot;&gt; --&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;microkernel.excalidraw.png” caption=“Microkernel Architecture” alt=“Shows common Microkernel Architecture.” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;!-- [![microkernel Architecture](static&#x2F;microkernel.excalidraw.png) --&gt;
&lt;pre data-lang=&quot;mermaid&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-mermaid &quot;&gt;&lt;code class=&quot;language-mermaid&quot; data-lang=&quot;mermaid&quot;&gt;&lt;span&gt;%%{init: {&amp;#39;theme&amp;#39;: &amp;#39;pastel&amp;#39;, &amp;quot;flowchart&amp;quot; : { &amp;quot;curve&amp;quot; : &amp;quot;basis&amp;quot; } } }%%
&lt;&#x2F;span&gt;&lt;span&gt;graph LR
&lt;&#x2F;span&gt;&lt;span&gt;    A[Core System] --&amp;gt; B[Plugin 1: Reporting]
&lt;&#x2F;span&gt;&lt;span&gt;    A --&amp;gt; C[Plugin 2: Analytics]
&lt;&#x2F;span&gt;&lt;span&gt;    A --&amp;gt; D[Plugin 3: Monitoring]
&lt;&#x2F;span&gt;&lt;span&gt;    subgraph VMs or Physical Servers
&lt;&#x2F;span&gt;&lt;span&gt;    A
&lt;&#x2F;span&gt;&lt;span&gt;    B
&lt;&#x2F;span&gt;&lt;span&gt;    C
&lt;&#x2F;span&gt;&lt;span&gt;    D
&lt;&#x2F;span&gt;&lt;span&gt;    end
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;service-oriented&quot;&gt;Service-Oriented&lt;&#x2F;h3&gt;
&lt;p&gt;Service-oriented architecture involves building an application as a collection of loosely coupled services that can be deployed and scaled independently, making the application more scalable, resilient, and easier to maintain. Kubernetes can manage such services when they are decomposed like microservices and the components need to scale independently. Classic service-oriented deployments, however, tend to centralize communication in an Enterprise Service Bus, a heavyweight shared component that does not map as cleanly onto Kubernetes’ model of independent services.&lt;&#x2F;p&gt;
&lt;p&gt;In this diagram, the API Gateway routes requests to the appropriate services. Each service is responsible for a specific domain within the e-commerce application. Services are connected through the Enterprise Service Bus (ESB), which handles communication and integration between them. Each service has its own dedicated database for managing its domain-specific data with CRUD (create, read, update, and delete) operations and is connected to the ESB.&lt;&#x2F;p&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;diagrams&#x2F;when-not-to-use-k8s&#x2F;service-oriented.excalidraw.png&quot; width=&quot;100%&quot; height=&quot;100%&quot;&gt; --&gt;
&lt;p&gt;{{&amp;lt; image src=“static&#x2F;service-oriented.excalidraw.png” caption=“Service-Oriented Architecture” alt=“Shows common Service-Oriented Architecture.” width=“100%” &amp;gt;}}&lt;&#x2F;p&gt;
&lt;!-- [![service-oriented Architecture](static&#x2F;service-oriented.excalidraw.png) --&gt;
&lt;pre data-lang=&quot;mermaid&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-mermaid &quot;&gt;&lt;code class=&quot;language-mermaid&quot; data-lang=&quot;mermaid&quot;&gt;&lt;span&gt;%%{init: {&amp;#39;theme&amp;#39;: &amp;#39;pastel&amp;#39;, &amp;quot;flowchart&amp;quot; : { &amp;quot;curve&amp;quot; : &amp;quot;basis&amp;quot; } } }%%
&lt;&#x2F;span&gt;&lt;span&gt;graph LR
&lt;&#x2F;span&gt;&lt;span&gt;    A[Web Frontend] --&amp;gt; B[API Gateway]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; C[Customer Management Service]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; D[Inventory Management Service]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; E[Order Processing Service]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; F[Shipping Service]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; G[Payment Service]
&lt;&#x2F;span&gt;&lt;span&gt;    B --&amp;gt; H[Notification Service]
&lt;&#x2F;span&gt;&lt;span&gt;    C --&amp;gt;|CRUD| I[Customer Database]
&lt;&#x2F;span&gt;&lt;span&gt;    D --&amp;gt;|CRUD| J[Inventory Database]
&lt;&#x2F;span&gt;&lt;span&gt;    E --&amp;gt;|CRUD| K[Orders Database]
&lt;&#x2F;span&gt;&lt;span&gt;    F --&amp;gt;|CRUD| L[Shipping Database]
&lt;&#x2F;span&gt;&lt;span&gt;    G --&amp;gt;|CRUD| M[Payment Database]
&lt;&#x2F;span&gt;&lt;span&gt;    H --&amp;gt;|CRUD| N[Notification Database]
&lt;&#x2F;span&gt;&lt;span&gt;    subgraph ESB[Enterprise Service Bus]
&lt;&#x2F;span&gt;&lt;span&gt;    C
&lt;&#x2F;span&gt;&lt;span&gt;    D
&lt;&#x2F;span&gt;&lt;span&gt;    E
&lt;&#x2F;span&gt;&lt;span&gt;    F
&lt;&#x2F;span&gt;&lt;span&gt;    G
&lt;&#x2F;span&gt;&lt;span&gt;    H
&lt;&#x2F;span&gt;&lt;span&gt;    end
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;h3 id=&quot;monolithic&quot;&gt;Monolithic&lt;&#x2F;h3&gt;
&lt;p&gt;Monolithic architecture refers to an application design pattern where the entire application is built as a single, cohesive unit. All the functionalities, including UI, business logic, and data management, are bundled together into one codebase, and the application is deployed as a single unit. Monolithic applications are tightly coupled and can be more challenging to maintain, update, or scale.&lt;&#x2F;p&gt;
&lt;!-- &lt;img src=&quot;..&#x2F;diagrams&#x2F;when-not-to-use-k8s&#x2F;monolithic.excalidraw.png&quot; width=&quot;100%&quot; height=&quot;100%&quot;&gt; --&gt;
&lt;p&gt;{{&amp;lt; image src=&quot;static&#x2F;monolithic.excalidraw.png&quot; caption=&quot;Monolithic Architecture&quot; alt=&quot;Shows common Monolithic Architecture.&quot; width=&quot;100%&quot; &amp;gt;}}&lt;&#x2F;p&gt;
&lt;!-- [![Monolithic Architecture](static&#x2F;monolithic.excalidraw.png) --&gt;
&lt;pre data-lang=&quot;mermaid&quot; style=&quot;background-color:#2b303b;color:#c0c5ce;&quot; class=&quot;language-mermaid &quot;&gt;&lt;code class=&quot;language-mermaid&quot; data-lang=&quot;mermaid&quot;&gt;&lt;span&gt;%%{init: {&amp;#39;theme&amp;#39;: &amp;#39;pastel&amp;#39;, &amp;quot;flowchart&amp;quot; : { &amp;quot;curve&amp;quot; : &amp;quot;basis&amp;quot; } } }%%
&lt;&#x2F;span&gt;&lt;span&gt;graph LR
&lt;&#x2F;span&gt;&lt;span&gt;    A[Web Browser] --&amp;gt; B
&lt;&#x2F;span&gt;&lt;span&gt;    B[&amp;quot;E-commerce Application&amp;lt;br&amp;gt;(Product Catalog, Shopping Cart,&amp;lt;br&amp;gt;Payment, User Management)&amp;quot;]
&lt;&#x2F;span&gt;&lt;span&gt;    subgraph VMs or Physical Servers
&lt;&#x2F;span&gt;&lt;span&gt;    B
&lt;&#x2F;span&gt;&lt;span&gt;    end
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;It may be worth considering alternatives like AWS Lambda, Docker Compose, and AWS Fargate, but the choice should be left to the discretion of the developers and organizations, who can pick the most appropriate solution for their unique circumstances. Real-life examples of applications that have successfully adopted these alternatives demonstrate the importance of selecting the right container orchestration platform based on individual requirements. Ultimately, an informed decision leads to more efficient development, deployment, and management of containerized applications, while also optimizing costs and resources.&lt;&#x2F;p&gt;
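For a small application, the Docker Compose alternative really can be this short. A minimal sketch — the image names, port, and environment values here are hypothetical placeholders, not a production configuration:

```yaml
# docker-compose.yaml — a single web app plus its database.
# "my-org/my-app" and the port/env values are hypothetical; substitute your own.
version: "3"
services:
  app:
    image: my-org/my-app:latest      # hypothetical application image
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://db:5432/app
    depends_on:
      - db
    restart: unless-stopped
  db:
    image: postgres:15
    volumes:
      - ./pgdata:/var/lib/postgresql/data   # persist data across upgrades
```

One `docker compose up -d` and you are running — no control plane, no per-object manifests — at the cost of single-host scheduling and no self-healing beyond restart policies.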
&lt;h2 id=&quot;tldr&quot;&gt;TL;DR&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;Kubernetes is a powerful and versatile container orchestration platform that is particularly useful for large-scale applications with high traffic volume, where it provides powerful management, scaling, and deployment capabilities. However, for small-scale or simple applications, there may be more cost-effective and simpler solutions available.&lt;&#x2F;li&gt;
&lt;li&gt;When dealing with legacy applications, integrating with Kubernetes can be challenging and may require significant effort and investment. It may be more feasible to stick to existing deployment methods or to use a different container orchestration platform that better suits the needs of these applications.&lt;&#x2F;li&gt;
&lt;li&gt;Kubernetes requires significant expertise in areas such as networking, security, storage, and deployment to be effectively deployed and managed. This expertise can be challenging to acquire and may not be feasible due to budgets, staffing, or available domain knowledge.&lt;&#x2F;li&gt;
&lt;li&gt;Kubernetes may not be the best choice for organizations with strict regulatory requirements, where it may be easier to lean on a Cloud provider’s shared responsibility model to offload some of the operational work of securing systems.&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;h2 id=&quot;resources&quot;&gt;Resources&lt;&#x2F;h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;medium.com&#x2F;nerd-for-tech&#x2F;software-architecture-for-the-cloud-c9226150c1f3&quot;&gt;Software Architecture for the Cloud&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;doyouneedkubernetes.com&#x2F;&quot;&gt;do you need kubernetes?&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;kind.sigs.k8s.io&#x2F;&quot;&gt;kind&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;k3s.io&#x2F;&quot;&gt;k3s&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;li&gt;&lt;a href=&quot;https:&#x2F;&#x2F;minikube.sigs.k8s.io&#x2F;&quot;&gt;minikube&lt;&#x2F;a&gt;&lt;&#x2F;li&gt;
&lt;&#x2F;ul&gt;
&lt;iframe
    width=&quot;320&quot;
    height=&quot;280&quot;
    src=&quot;https:&#x2F;&#x2F;www.youtube.com&#x2F;embed&#x2F;RRykwyJkOIw&quot;
    frameborder=&quot;0&quot;
    allow=&quot;autoplay; encrypted-media&quot;
    allowfullscreen
&gt;
&lt;&#x2F;iframe&gt;
</description>
      </item>
      <item>
          <title>Pi Hole</title>
          <pubDate>Tue, 27 Apr 2021 19:30:46 -0600</pubDate>
          <author>Unknown</author>
          <link>https://kcirtapfromspace.github.io/kcirtap-blog/posts/pi-hole/</link>
          <guid>https://kcirtapfromspace.github.io/kcirtap-blog/posts/pi-hole/</guid>
          <description xml:base="https://kcirtapfromspace.github.io/kcirtap-blog/posts/pi-hole/">&lt;h1 id=&quot;pi-hole&quot;&gt;Pi Hole&lt;&#x2F;h1&gt;
&lt;p&gt;For a while I was running Pi-hole on a 64-bit Raspberry Pi.
I installed docker, docker-compose, and git, then configured the Raspberry Pi to run Docker, following
https:&#x2F;&#x2F;github.com&#x2F;pi-hole&#x2F;docker-pi-hole#readme&lt;&#x2F;p&gt;
&lt;p&gt;docker-compose.yaml&lt;&#x2F;p&gt;
&lt;pre style=&quot;background-color:#2b303b;color:#c0c5ce;&quot;&gt;&lt;code&gt;&lt;span&gt;version: &amp;quot;3&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;# More info at https:&#x2F;&#x2F;github.com&#x2F;pi-hole&#x2F;docker-pi-hole&#x2F; and https:&#x2F;&#x2F;docs.pi-hole.net&#x2F;
&lt;&#x2F;span&gt;&lt;span&gt;services:
&lt;&#x2F;span&gt;&lt;span&gt;  pihole:
&lt;&#x2F;span&gt;&lt;span&gt;    container_name: pihole
&lt;&#x2F;span&gt;&lt;span&gt;    image: pihole&#x2F;pihole:latest
&lt;&#x2F;span&gt;&lt;span&gt;    # For DHCP it is recommended to remove these ports and instead add: network_mode: &amp;quot;host&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;    ports:
&lt;&#x2F;span&gt;&lt;span&gt;      - &amp;quot;53:53&#x2F;tcp&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;      - &amp;quot;53:53&#x2F;udp&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;      - &amp;quot;67:67&#x2F;udp&amp;quot; # Only required if you are using Pi-hole as your DHCP server
&lt;&#x2F;span&gt;&lt;span&gt;      - &amp;quot;80:80&#x2F;tcp&amp;quot;
&lt;&#x2F;span&gt;&lt;span&gt;    environment:
&lt;&#x2F;span&gt;&lt;span&gt;      TZ: &amp;#39;America&#x2F;Denver&amp;#39;
&lt;&#x2F;span&gt;&lt;span&gt;      PIHOLE_DNS_: 127.0.0.1#5053
&lt;&#x2F;span&gt;&lt;span&gt;      TEMPERATUREUNIT: f
&lt;&#x2F;span&gt;&lt;span&gt;      DNSSEC: &amp;#39;true&amp;#39;
&lt;&#x2F;span&gt;&lt;span&gt;      ADMIN_EMAIL: 
&lt;&#x2F;span&gt;&lt;span&gt;
&lt;&#x2F;span&gt;&lt;span&gt;      # WEBPASSWORD: &amp;#39;set a secure password here or it will be random&amp;#39;
&lt;&#x2F;span&gt;&lt;span&gt;    # Volumes store your data between container upgrades
&lt;&#x2F;span&gt;&lt;span&gt;    volumes:
&lt;&#x2F;span&gt;&lt;span&gt;      - &amp;#39;.&#x2F;etc-pihole:&#x2F;etc&#x2F;pihole&amp;#39;
&lt;&#x2F;span&gt;&lt;span&gt;      - &amp;#39;.&#x2F;etc-dnsmasq.d:&#x2F;etc&#x2F;dnsmasq.d&amp;#39;
&lt;&#x2F;span&gt;&lt;span&gt;    #   https:&#x2F;&#x2F;github.com&#x2F;pi-hole&#x2F;docker-pi-hole#note-on-capabilities
&lt;&#x2F;span&gt;&lt;span&gt;    cap_add:
&lt;&#x2F;span&gt;&lt;span&gt;      - NET_ADMIN # Recommended but not required (DHCP needs NET_ADMIN)
&lt;&#x2F;span&gt;&lt;span&gt;    restart: unless-stopped
&lt;&#x2F;span&gt;&lt;&#x2F;code&gt;&lt;&#x2F;pre&gt;
&lt;p&gt;Configured Pi-hole to filter DNS over HTTPS (DoH) through Cloudflare (1.1.1.2)
&lt;a href=&quot;https:&#x2F;&#x2F;docs.pi-hole.net&#x2F;ftldns&#x2F;dns-cache&#x2F;&quot;&gt;DNS cache - Pi-hole documentation&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
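The `PIHOLE_DNS_: 127.0.0.1#5053` line in the compose file above assumes a local DoH proxy listening on port 5053. One way to provide it — a sketch of a common pattern, not necessarily the exact setup I used — is a `cloudflared` sidecar service in the same compose file:

```yaml
# Additional service for the docker-compose.yaml above: cloudflared accepts
# plain DNS on port 5053 and forwards it to Cloudflare's 1.1.1.2
# (malware-blocking) DNS-over-HTTPS endpoint.
  cloudflared:
    container_name: cloudflared
    image: cloudflare/cloudflared:latest
    command: proxy-dns --address 0.0.0.0 --port 5053 --upstream https://1.1.1.2/dns-query
    restart: unless-stopped
```

Note that with cloudflared as a separate service, Pi-hole would reach it by service name (`PIHOLE_DNS_: cloudflared#5053`) rather than `127.0.0.1`, since each container has its own loopback; `127.0.0.1#5053` only works when the proxy shares Pi-hole's network namespace (e.g. both on `network_mode: "host"`).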
&lt;p&gt;Updated the ad lists based on recommendations from
&lt;a href=&quot;https:&#x2F;&#x2F;firebog.net&#x2F;&quot;&gt;Blocklist Collection ¦ Firebog&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
&lt;p&gt;Added SSH for local hosts
&lt;a href=&quot;https:&#x2F;&#x2F;scotthelme.co.uk&#x2F;securing-dns-across-all-of-my-devices-with-pihole-dns-over-https-1-1-1-1&#x2F;&quot;&gt;Securing DNS across all of my devices with Pi-Hole + DNS-over-HTTPS + 1.1.1.1&lt;&#x2F;a&gt;&lt;&#x2F;p&gt;
</description>
      </item>
    </channel>
</rss>
