path: root/public/posts/2024-08-25-03.html
authoralex emery <[email protected]>2024-11-03 15:33:28 +0000
committeralex emery <[email protected]>2024-11-03 15:33:28 +0000
commit508527f52de524a4fd174d386808e314b4138b11 (patch)
tree2593af258b67decbf0207e2547b7ea55f6b051d7 /public/posts/2024-08-25-03.html
parent22bfae8f9637633d5608caad3ce56b64c6819505 (diff)
feat: static builds
Diffstat (limited to 'public/posts/2024-08-25-03.html')
-rw-r--r--public/posts/2024-08-25-03.html78
1 files changed, 78 insertions, 0 deletions
diff --git a/public/posts/2024-08-25-03.html b/public/posts/2024-08-25-03.html
new file mode 100644
index 0000000..07e48a0
--- /dev/null
+++ b/public/posts/2024-08-25-03.html
@@ -0,0 +1,78 @@
+
+<!DOCTYPE html>
+<html lang="en">
+
+<head>
+ <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+ <meta name="description" content="Home for a73x" />
+ <meta name="author" content="a73x" />
+ <meta name="viewport"
+ content="user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, width=device-width" />
+ <title>a73x</title>
+ <link rel="stylesheet" href="/static/styles.css">
+ <link rel="stylesheet" href="/static/syntax.css">
+</head>
+
+<body>
+ <h1>a73x</h1>
+ <sub>high effort, low reward</sub>
+ <nav class="nav">
+ <ul>
+
+
+ <li><a class="no-decorations" href="/">home</a></li>
+
+
+
+ <li><a class="no-decorations" href="/posts">posts</a></li>
+
+
+
+ <li><a class="no-decorations" href="/ethos">ethos</a></li>
+
+
+ </ul>
+ </nav>
+
+<h1>Levels of Optimisation</h1>
+<nav>
+
+<ul>
+<li><a href="#benchmark-optimisation">Benchmark Optimisation</a></li>
+
+<li><a href="#profile-guided-optimisation">Profile guided optimisation</a></li>
+
+<li><a href="#runtime-optimisation">Runtime optimisation</a></li>
+</ul>
+
+</nav>
+<p>This probably isn&rsquo;t strictly true, but it makes sense to me.
+We&rsquo;ve got three levels of &ldquo;optimisation&rdquo; (assuming your actual design doesn&rsquo;t suck and need optimising itself).</p>
+
+<h1 id="benchmark-optimisation">Benchmark Optimisation</h1>
+
+<p>To begin with, we have benchmark optimisation: you create a benchmark locally, dump a profile of it, and optimise the code. Then you run your tests, because the most optimal solution is &ldquo;return nil&rdquo;, and you want to make sure you didn&rsquo;t break anything.
+This is the first and easiest optimisation because it only requires a function, nothing else, and can be done in isolation. You don&rsquo;t need a working &ldquo;application&rdquo; here, just the function you&rsquo;re trying to benchmark. There are different types of benchmarks, micro, macro, etc., but I&rsquo;m leaving them out of scope for this conversation. Go read <a href="https://learning.oreilly.com/library/view/efficient-go/9781098105709/">Efficient Go</a>.</p>
+
+<h1 id="profile-guided-optimisation">Profile guided optimisation</h1>
+
+<p>This is a mild step up from benchmark optimisation, only because you need live server load from which to pull a profile, but it is probably the most hands-off step. You import the <code>net/http/pprof</code> package into your service, call <code>/debug/pprof/profile?seconds=30</code> to get a profile, and compile your binary with <code>go build -pgo=profile.pgo</code>. The compiler will make optimisations for you, and even if your profile is garbage, it shouldn&rsquo;t cause any regressions.</p>
+
+<p>You probably want to get a few profiles and merge them using <code>go tool pprof -proto a.out b.out &gt; merged</code>. This helps produce optimisations that are relevant to your overall system, rather than just a single 30-second slice.
+Also, if you have long-running calls that are chained together, a 30-second snapshot might not be enough, so try a sample with a longer window.</p>
+
+<h1 id="runtime-optimisation">Runtime optimisation</h1>
+
+<p>This is where you expose the data from the <code>runtime/metrics</code> package and monitor it continuously. There&rsquo;s a long list of metrics you might be interested in, plus a recommended set, and generally you are looking to optimise your interactions with the Go runtime. There are a few stats here: goroutine counts, goroutines waiting to run, heap size, how often garbage collection runs, how long garbage collection takes, etc. All useful information when optimising - when garbage collection is running, your program ain&rsquo;t. It&rsquo;s also useful for finding memory leaks: it becomes pretty obvious you are leaking goroutines when you graph the count and watch it only ever go up.
+It&rsquo;s also just lowkey fun to look at the exposed data and understand what your system is doing.</p>
+
+
+ <footer>
+        <br />
+        <hr /><br />
+ <p>see something you disagree with? email: <a href="mailto:[email protected]">[email protected]</a></p>
+ </footer>
+</body>
+
+</html>
+