Diffstat (limited to 'content/posts')
-rw-r--r--  content/posts/001.md   36
-rw-r--r--  content/posts/002.md  117
-rw-r--r--  content/posts/003.md   20
-rw-r--r--  content/posts/004.md  251
4 files changed, 424 insertions, 0 deletions
diff --git a/content/posts/001.md b/content/posts/001.md
new file mode 100644
index 0000000..48ea720
--- /dev/null
+++ b/content/posts/001.md
@@ -0,0 +1,36 @@
---
title: "Go Benchmarking"
tags: posts
---
1. write a benchmark (see the sketch after this list)
2. run the benchmark
3. get a profile
4. optimise
5. run your tests
6. goto 2.
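To make step 1 concrete, here's a minimal sketch of a benchmark, using `strconv.Atoi` as a stand-in for whatever function you actually care about:

```go
package bench

import (
	"strconv"
	"testing"
)

// BenchmarkAtoi runs the code under test b.N times; the testing package
// picks b.N so the measurement runs long enough to be stable.
func BenchmarkAtoi(b *testing.B) {
	for i := 0; i < b.N; i++ {
		if _, err := strconv.Atoi("123456"); err != nil {
			b.Fatal(err)
		}
	}
}
```

The commands below run it and dump a profile you can feed into pprof.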
## cpuprofile
`go test -run=XXX -bench <regex> -cpuprofile <file>`
(`-run=XXX` matches no tests, so only the benchmarks run)

## memprofile
`go test -run=XXX -bench <regex> -memprofile <file> -benchmem`

## pprof
[pprof usage](https://github.com/google/pprof/blob/main/doc/README.md)

`go tool pprof -http=:8080 profile.pb.gz`
will show a web UI for analysing the profile.

### views:
- flame graph: `localhost:8080/ui/flamegraph`
  - shows a percentage breakdown of how much of the resource each "call" consumed.
  - clicking a box rescales it to "100%", allowing for deep diving
  - right-click and choose "show source code" to view the annotated source
- top: `localhost:8080/ui/top`
  - shows the top functions
  - `flat`: profile samples in this function only
  - `cum`: (cumulative) profile samples in this function and its callees
- source: `localhost:8080/ui/source`
  - each source line is annotated with the time spent on that line
  - the first number does not count time spent in functions called from that line
  - the second number does

diff --git a/content/posts/002.md b/content/posts/002.md
new file mode 100644
index 0000000..8e22144
--- /dev/null
+++ b/content/posts/002.md
@@ -0,0 +1,117 @@
---
title: Go Project Layouts
tags: posts
---

Do you lie awake at night and consider how to optimally lay out your Go project?
No...? What about recommending Windows to a friend or colleague??
Yeah, me neither...

I've seen a lot online that shows what I can only describe as endgame enterprise Go project layouts. These layouts are heavily folder-based and only make sense once your project has grown large enough to warrant the verbosity those folders provide. My only problem is that people often try to start there.

A lot of design advice tells you to think about your project in layers:
- api
- domain
- storage

If you read [The Clean Architecture](https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html)
you get told the layers should be:

1. entities
2. use cases
3. interface adapters
4. frameworks and drivers

and that all dependencies should point in (yeah, I know, I didn't draw a circle so "in" doesn't quite make sense, but I'm sure you can follow).

This is an excellent idea; separation of concerns is good.

So you make your folders.
```
.
├── drivers
├── entities
├── interfaces
└── usecases
```
aaand this is an awful idea. I don't even want to go further into this hypothetical layout because it hurts too much.

Find me a project that actually creates these folders, and I'll find you the Medium article titled "Clean Code in Go" which spawned it.

The important parts of clean code are the ideas presented and how you apply them to a package-oriented language. Creating a folder to represent each layer doesn't really carry much weight here.

As a package-oriented language, we want to think and reason about things in terms of packages. Yes, there will be a point where you may want to group your packages somehow, but that is mostly ceremonial.
Go doesn't care whether you're accessing `domain/bar` or `domain/foo/bar`; either will simply be imported as `bar`. This means that what matters is what's in that package `bar`, since everything will be read as `bar.Thing`, i.e. `import "bytes"` and then `bytes.Buffer`.

So, the package name sets context and expectations. If I grab the `json` package, I expect that package to do things around JSON. I'd feel a bit confused if I was able to configure an SMTP server with it.

If you cannot come up with a package name that's a meaningful prefix for the package's contents, the package abstraction boundary may be wrong.

"But you've still not provided a good example?"
Well,
yes.

I think a project should grow organically to some degree. What we want to do is write code, and refactoring in Go is fairly cheap.

Start with a `main.go` and make a `Run` function, or some equivalent, which it calls.

```go
package main

import "log"

func Run() error {
	// actual important stuff here
	return nil
}

func main() {
	if err := Run(); err != nil {
		log.Fatal(err)
	}
}
```

This allows you to test your `Run` function in a unit test, and keeps your `main` func minimal.
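As a sketch, that unit test is about as small as tests get (here `Run` is still a placeholder, so there isn't much to assert):

```go
package main

import "testing"

// TestRun exercises Run directly, without going through main.
func TestRun(t *testing.T) {
	if err := Run(); err != nil {
		t.Fatalf("Run() returned an error: %v", err)
	}
}
```

Once `Run` does real work you'd probably pass it configuration (flags, listeners, output writers) so tests can control it, but the shape stays the same.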
As your project grows, you can keep it flat inside the root directory:

```
├── api.go
├── go.mod
├── go.sum
├── main.go
├── rss.go
└── sqlite.go
```
Even just glancing at that, you can guess that this might be an RSS server backed by sqlite.

Who knows what
```
├── drivers
├── entities
├── interfaces
└── usecases
```

does.

As things evolve you might want to put packages in `internal` to stop them being imported from outside the module, or in `cmd` as you develop multiple binaries. Placing things in `internal` means you're free to mess around with them without breaking any public contracts other users rely on.
I can't be bothered rewriting my example, so here's the one from the official Go docs; it's probably all right.
[Server Project](https://go.dev/doc/modules/layout#server-project)

```
project-root-directory/
  go.mod
  internal/
    auth/
      ...
    metrics/
      ...
    model/
      ...
  cmd/
    api-server/
      main.go
    metrics-analyzer/
      main.go
    ...
... the project's other directories with non-Go code
```

My vague summary is that clean code gives you a north star to follow: an idea of how you want to separate and reason about the packages you create. You don't need to create the layers of abstraction that are also presented. Think about what things do or relate to, and create packages for them. Allow your project to grow organically, but don't expect architecture to appear without following that north star.

diff --git a/content/posts/003.md b/content/posts/003.md
new file mode 100644
index 0000000..e19e891
--- /dev/null
+++ b/content/posts/003.md
@@ -0,0 +1,20 @@
---
title: "Levels of Optimisation"
tags: posts
---
This probably isn't strictly true, but it makes sense to me.
We've got three levels of "optimisation" (assuming your actual design doesn't suck and need optimising).

## Benchmark Optimisation
To begin with, we have benchmark optimisation: you create a benchmark locally, dump a profile of it, and optimise it. Then you run your tests, because the most optimal solution is "return nil", and make sure you didn't break anything.
This is the first and easiest level because it only requires a function, nothing else, and can be done in isolation. You don't need a working "application" here, just the function you're trying to benchmark. There are different types of benchmarks, micro, macro, etc., but I'm leaving them out of scope for this conversation. Go read [Efficient Go](https://learning.oreilly.com/library/view/efficient-go/9781098105709/).

## Profile guided optimisation
This is a mild step up from benchmark optimisation, only because you need live server load from which to pull a profile, but it is probably the most hands-off step. You import the `net/http/pprof` package into your service (wiring sketched after this section), call `/debug/pprof/profile?seconds=30` to get a profile, and compile your binary with `go build -pgo=profile.pgo`. The compiler will make optimisations for you, and even if your profile is garbage, it shouldn't cause any regressions.

You probably want to get a few profiles and merge them using `go tool pprof -proto a.out b.out > merged`. This will help provide optimisations that are relevant to your overall system instead of just a single 30s slice.
Also, if you have long-running calls that are chained together, a 30-second snapshot might not be enough, so try a sample with a longer window.
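As a sketch of that wiring (this assumes you're happy serving the profiling endpoints from `http.DefaultServeMux` on the same port; with a custom mux you'd mount the handlers yourself):

```go
package main

import (
	"log"
	"net/http"

	// The blank import registers the /debug/pprof/* handlers
	// on http.DefaultServeMux.
	_ "net/http/pprof"
)

func main() {
	// Your real routes would be registered here as well.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Then something like `curl -o cpu.pprof 'http://localhost:8080/debug/pprof/profile?seconds=30'` followed by `go build -pgo=cpu.pprof ./...` (the file name and package path are placeholders).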
## Runtime optimisation
This is where you expose the `runtime/metrics` data and monitor it continuously. There's a long list of metrics you might be interested in, plus a recommended set, and generally you are looking to optimise your interactions with the Go runtime. There are a few useful stats here: goroutine counts, goroutines waiting to run, heap size, how often garbage collection runs, how long garbage collection takes, etc. All useful information when optimising - when garbage collection is running, your program ain't. It's also useful for finding memory leaks; it becomes pretty obvious you are leaking goroutines when you graph the count and just watch it go up and never come down.
It's also just lowkey fun to look at the exposed data and understand what your system is doing.
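If you want to poke at those numbers directly, here's a minimal sketch using the `runtime/metrics` package (the metric names come from `metrics.All()` and can differ between Go versions); in practice you'd export them to whatever monitoring stack you already run:

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	samples := []metrics.Sample{
		{Name: "/sched/goroutines:goroutines"},
		{Name: "/memory/classes/heap/objects:bytes"},
		{Name: "/gc/cycles/total:gc-cycles"},
	}
	metrics.Read(samples)

	for _, s := range samples {
		// Each metric reports its own kind; the three above are uint64s.
		switch s.Value.Kind() {
		case metrics.KindUint64:
			fmt.Printf("%s = %d\n", s.Name, s.Value.Uint64())
		case metrics.KindFloat64:
			fmt.Printf("%s = %g\n", s.Name, s.Value.Float64())
		default:
			fmt.Printf("%s = unexpected kind %v\n", s.Name, s.Value.Kind())
		}
	}
}
```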
diff --git a/content/posts/004.md b/content/posts/004.md
new file mode 100644
index 0000000..ebfaad2
--- /dev/null
+++ b/content/posts/004.md
@@ -0,0 +1,251 @@
---
title: "Writing HTTP Handlers"
tags: posts
---

I'm sharing how I write handlers in Go.

I write them like this for reasons that are probably fairly contextual. I've written a few applications and had to swap REST libraries, or even swap REST for gRPC, so things that make that easier speak to me a great deal.

I've used plain `int`s instead of the `http.StatusXXX` constants and omitted JSON struct tags in an attempt to save screen space.

To begin with, you might have something like this:
```
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Then you might get told off because you've just registered routes with the default mux, which isn't very testable.

So you tweak it a little bit.
```
package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/", handler)

	return mux
}

func Run() error {
	mux := newMux()
	return http.ListenAndServe(":8080", mux)
}

func main() {
	if err := Run(); err != nil {
		log.Fatal(err)
	}
}
```

`newMux()` gives you a `mux` to use when testing.

`Run` keeps `main` nice and clean, so you can just return errors as needed instead of calling `log.Fatal` everywhere and generally being messy.
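For example, a test against that mux might look something like this sketch (the exact assertions are up to you):

```go
package main

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestHello(t *testing.T) {
	mux := newMux()

	// Drive the mux directly; no real network listener is needed.
	req := httptest.NewRequest(http.MethodGet, "/", nil)
	rec := httptest.NewRecorder()
	mux.ServeHTTP(rec, req)

	if rec.Code != 200 {
		t.Fatalf("unexpected status: %d", rec.Code)
	}
	body, err := io.ReadAll(rec.Result().Body)
	if err != nil {
		t.Fatal(err)
	}
	if string(body) != "Hello, World!" {
		t.Fatalf("unexpected body: %q", body)
	}
}
```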
But now you need to do something real: you want to store and fetch data.

```
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strconv"
)

func NewMux() *http.ServeMux {
	mux := http.NewServeMux()
	s := Server{
		data: make(map[int]Content),
	}

	s.Register(mux)
	return mux
}

func Run() error {
	mux := NewMux()
	return http.ListenAndServe(":8080", mux)
}

type Server struct {
	data map[int]Content
}

func (s *Server) Register(mux *http.ServeMux) {
	mux.HandleFunc("GET /{id}", s.Get)
	mux.HandleFunc("POST /", s.Post)
}

func (s *Server) Get(w http.ResponseWriter, r *http.Request) {
	idStr := r.PathValue("id")
	id, err := strconv.Atoi(idStr)
	if err != nil {
		w.WriteHeader(400)
		w.Write([]byte(fmt.Sprintf("failed to parse id: %v", err)))
		return
	}
	data, ok := s.data[id]
	if !ok {
		w.WriteHeader(404)
		w.Write([]byte("not found"))
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(200)
	json.NewEncoder(w).Encode(data)
}

type ContentPostReq struct {
	Foo string
}

func (s *Server) Post(w http.ResponseWriter, r *http.Request) {
	req := ContentPostReq{}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		w.WriteHeader(400)
		w.Write([]byte(fmt.Sprintf("failed to parse request: %v", err)))
		return
	}
	id := len(s.data)
	content := Content{
		ID:  id,
		Foo: req.Foo,
	}
	s.data[id] = content

	w.WriteHeader(200)
	json.NewEncoder(w).Encode(content)
}

type Content struct {
	ID  int
	Foo string
}

func main() {
	if err := Run(); err != nil {
		log.Fatal(err)
	}
}
```

```
❯ curl -X POST localhost:8080 --header "Content-Type: application/json" -d '{"foo":"bar"}'
{"ID":0,"Foo":"bar"}
❯ curl -X GET localhost:8080/0
{"ID":0,"Foo":"bar"}
```

Erm, well, okay.
Quite a bit has changed here, but I'm sure you can read it. We now save and fetch very, very safely from a map and return the response as JSON. I've cut some corners for brevity because I want to get to the main point.

This API is inconsistent. Some paths return JSON, others return plain strings. Overall, it's just a mess.

So let's try to standardise things.
First, let's design some form of REST spec.

```
type JSONResp[T any] struct {
	Resources []T
	Errs      []ErrorResp
}

type ErrorResp struct {
	Status int
	Msg    string
}
```
We want to be able to support fetching multiple resources at once; if we can only fetch some of them, we return those under `Resources` and report the failures under `Errs`.

Now, add some helper functions to handle things.
```
func Post[In any, Out any](successCode int, fn func(context.Context, In) ([]Out, []ErrorResp)) func(http.ResponseWriter, *http.Request) {
	return func(w http.ResponseWriter, r *http.Request) {
		var v In

		if err := json.NewDecoder(r.Body).Decode(&v); err != nil {
			writeJSONResp[Out](w, http.StatusBadRequest, nil, []ErrorResp{
				{
					Status: http.StatusBadRequest,
					Msg:    fmt.Sprintf("failed to parse request: %v", err),
				},
			})

			return
		}

		res, errs := fn(r.Context(), v)
		writeJSONResp(w, successCode, res, errs)
	}
}

func writeJSONResp[T any](w http.ResponseWriter, successCode int, res []T, errs []ErrorResp) {
	body := JSONResp[T]{
		Resources: res,
		Errs:      errs,
	}

	status := successCode
	for _, e := range errs {
		if e.Status > status {
			status = e.Status
		}
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	json.NewEncoder(w).Encode(body)
}
```
And we've standardised all `POST` requests!

This helper can be used by every `POST` route, ensuring they all adhere to the spec. It also removes the repetitive code around marshalling and unmarshalling JSON, and handles errors in a consistent manner.
The handler functions now accept a `context.Context` param and their expected struct input.
```
func (s *Server) Register(mux *http.ServeMux) {
...
	mux.HandleFunc("POST /", Post(201, s.Post))
}

func (s *Server) Post(ctx context.Context, req ContentPostReq) ([]Content, []ErrorResp) {
	id := len(s.data)
	content := Content{
		ID:  id,
		Foo: req.Foo,
	}
	s.data[id] = content

	return []Content{content}, nil
}
```
As you can see, the Post handler is quite a bit cleaner now.

You can extend this to all the other request types. If you have query or path parameters, you could either pass in the request, write a custom struct tag parser, or find someone else who has already done it: [https://github.com/gorilla/schema](https://github.com/gorilla/schema).
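For instance, a `GET` helper for routes with an `{id}` path value might look something like this sketch; it builds on the types above, hard-codes the `id` path parameter rather than solving parameter binding generally, and replaces the raw `Get` handler from earlier:

```go
func Get[Out any](successCode int, fn func(context.Context, string) ([]Out, []ErrorResp)) func(http.ResponseWriter, *http.Request) {
	return func(w http.ResponseWriter, r *http.Request) {
		// Hand the raw path value to the handler and let it decide
		// how to parse and validate it.
		res, errs := fn(r.Context(), r.PathValue("id"))
		writeJSONResp(w, successCode, res, errs)
	}
}

func (s *Server) Register(mux *http.ServeMux) {
	mux.HandleFunc("GET /{id}", Get(200, s.Get))
	mux.HandleFunc("POST /", Post(201, s.Post))
}

func (s *Server) Get(ctx context.Context, idStr string) ([]Content, []ErrorResp) {
	id, err := strconv.Atoi(idStr)
	if err != nil {
		return nil, []ErrorResp{{Status: 400, Msg: fmt.Sprintf("failed to parse id: %v", err)}}
	}
	content, ok := s.data[id]
	if !ok {
		return nil, []ErrorResp{{Status: 404, Msg: "not found"}}
	}
	return []Content{content}, nil
}
```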
