Diffstat (limited to 'content')

 -rw-r--r--  content/posts/001.md | 10
 -rw-r--r--  content/posts/003.md |  7
 -rw-r--r--  content/posts/005.md | 10

3 files changed, 14 insertions, 13 deletions
diff --git a/content/posts/001.md b/content/posts/001.md
index fe1e680..f7d520e 100644
--- a/content/posts/001.md
+++ b/content/posts/001.md
@@ -2,6 +2,8 @@
 title: "Go Benchmarking"
 tags: posts
 ---
+
+The benchmark cycle:
 1. write a benchmark
 2. run a benchmark
 3. get a profile
@@ -9,17 +11,17 @@ tags: posts
 5. run your tests
 6. goto 2.
 
-## cpuprofile
+# cpuprofile
 ```shell
 go test -run=XXX -bench <regex> -cpuprofile <file>
 ```
 
-## memprofile
+# memprofile
 ```shell
 go test -run=XXX -bench <regex> -memprofile <file> -benchmem
 ```
 
-## pprof
+# pprof
 [pprof usage](https://github.com/google/pprof/blob/main/doc/README.md)
 
 ```shell
@@ -27,7 +29,7 @@
 go tool pprof -http=:8080 profile.pb.gz
 ```
 will show a web UI for analysing the profile.
 
-### views:
+## views
 - flame graph: `localhost:8080/ui/flamegraph` - shows percentage breakdown of how much resource each "call" made.
 - clicking a box will make it "100%" allowing for deep diving
diff --git a/content/posts/003.md b/content/posts/003.md
index e19e891..0f47968 100644
--- a/content/posts/003.md
+++ b/content/posts/003.md
@@ -2,19 +2,20 @@
 title: "Levels of Optimisation"
 tags: posts
 ---
+
 This probably isn't strictly true, but it makes sense to me. We've got three levels of "optimisation" (assuming your actual design doesn't suck and need optimising).
 
-## Benchmark Optimisation
+# Benchmark Optimisation
 
 To begin with, we have benchmark optimisation; you create a benchmark locally, dump a profile of it, and optimise it. Then you run your tests (because the most optimal solution is "return nil") and make sure you didn't break anything. This is the first and easiest optimisation because it only requires a function, nothing else, and can be done in isolation. You don't need a working "application" here, just the function you're trying to benchmark.
 There are different types of benchmarks (micro, macro, etc.), but I'm leaving them out of scope for this conversation.
 
 Go read [Efficient Go](https://learning.oreilly.com/library/view/efficient-go/9781098105709/).
 
-## Profile guided optimisation
+# Profile guided optimisation
 
 This is a mild step up from benchmark optimisation, only because you need a live server load from which to pull a profile, but it is probably the most hands-off step. You import the `net/http/pprof` package into your service, call `/debug/pprof/profile?seconds=30` to get a profile, and compile your binary with `go build -pgo=profile.pgo`. The compiler will make optimisations for you, and even if your profile is garbage, it shouldn't cause any regressions. You probably want to get a few profiles and merge them using `go tool pprof -proto a.out b.out > merged`. This will help provide optimisations that are more relevant to your overall system, instead of just a single 30s slice. Also, if you have long-running calls that are chained together, a 30-second snapshot might not be enough, so try a sample with a longer window.
 
-## Runtime optimisation
+# Runtime optimisation
 
 This is where you expose `/runtime/metrics` and monitor them continuously. There's a list of metrics that you might be interested in, a recommended set of metrics, and generally you are looking to optimise your interactions with the Go runtime. There are a few stats here: goroutine counts, goroutines waiting to run, heap size, how often garbage collection runs, how long garbage collection takes, etc. All useful information to use when optimising: when garbage collection is running, your program ain't. It's also useful for finding memory leaks; it becomes pretty obvious you are leaking goroutines when you graph the count and just watch it go up and never down. It's also just lowkey fun to look at the exposed data and understand what your system is doing.
\ No newline at end of file
diff --git a/content/posts/005.md b/content/posts/005.md
index 55aa9ea..9f37c70 100644
--- a/content/posts/005.md
+++ b/content/posts/005.md
@@ -4,7 +4,8 @@ tags: posts
 ---
 
 https://pkg.go.dev/unique
->The unique package provides facilities for canonicalising ("interning") comparable values. [[1](https://pkg.go.dev/unique)]
+>The unique package provides facilities for canonicalising ("interning") comparable values.[^1]
+[^1]: https://pkg.go.dev/unique
 
 oh yeah, that's obvious, I fully understand what this package does now, great read, tune in for the next post.
 
@@ -31,10 +32,7 @@
 With a small demo here https://go.dev/play/p/piSYjCHIcLr
 
 This implementation is fairly naive, it can only grow and it only works with strings, so naturally Go's implementation is better. It's also worth noting that, since strings are a pointer under the hood:
->When comparing two strings, if the pointers are not equal, then we must compare their contents to determine equality. But if we know that two strings are canonicalized, then it _is_ sufficient to just check their pointers. [[2](https://go.dev/blog/unique)]
+>When comparing two strings, if the pointers are not equal, then we must compare their contents to determine equality. But if we know that two strings are canonicalized, then it _is_ sufficient to just check their pointers.[^2]
+[^2]: https://go.dev/blog/unique
 
 So to recap, Go's `unique` package provides a way to reuse objects instead of creating new ones, if we consider the objects of equal value.
-
-## References
-1. https://pkg.go.dev/unique
-2. https://go.dev/blog/unique
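The naive, grow-only string interner that 005.md describes can be sketched roughly like this (the `intern` helper and its cache are made up for illustration; this is not the `unique` API):

```go
package main

import "fmt"

// interned is a grow-only cache: the first copy of each string value
// ever seen becomes the canonical one. Like the naive implementation
// described above, it only works with strings and never shrinks.
var interned = map[string]string{}

func intern(s string) string {
	if canon, ok := interned[s]; ok {
		return canon
	}
	interned[s] = s
	return s
}

func main() {
	a := intern("hello")
	b := intern("hello")
	fmt.Println(a == b) // → true: both are the canonical copy
}
```
Because `intern` always hands back the first stored copy, two interned strings share the same backing data, which is what makes the pointer-comparison shortcut in the quote above possible.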
\ No newline at end of file
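The benchmark cycle from 001.md can also be driven outside `go test` via `testing.Benchmark`; a rough sketch, where `sum` is a hypothetical stand-in for whatever function you're actually profiling:

```go
package main

import (
	"fmt"
	"testing"
)

// sum is a stand-in for the function you want to optimise.
func sum(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	xs := make([]int, 1024)
	for i := range xs {
		xs[i] = i
	}
	// Step 2 of the cycle: run the benchmark. testing.Benchmark works
	// outside `go test`, which is handy for quick experiments; for
	// profiles you'd still use the -cpuprofile/-memprofile flags above.
	result := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sum(xs)
		}
	})
	fmt.Println(result.N > 0, result.NsPerOp() >= 0)
}
```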
