From 6beea1d4127d2d51bfdc75162423407c198d19da Mon Sep 17 00:00:00 2001 From: a73x Date: Sat, 21 Dec 2024 11:53:54 +0000 Subject: add date to posts --- public/posts/2024-08-25-01 | 128 ++++++++++++++++++ public/posts/2024-08-25-01.html | 2 + public/posts/2024-08-25-02 | 164 ++++++++++++++++++++++ public/posts/2024-08-25-02.html | 2 + public/posts/2024-08-25-03 | 89 ++++++++++++ public/posts/2024-08-25-03.html | 2 + public/posts/2024-08-26-01 | 291 ++++++++++++++++++++++++++++++++++++++++ public/posts/2024-08-26-01.html | 2 + public/posts/2024-08-31-01 | 107 +++++++++++++++ public/posts/2024-08-31-01.html | 2 + public/posts/2024-09-04-01 | 176 ++++++++++++++++++++++++ public/posts/2024-09-04-01.html | 2 + public/posts/2024-09-08-01 | 276 +++++++++++++++++++++++++++++++++++++ public/posts/2024-09-08-01.html | 2 + public/posts/2024-11-09-01 | 97 ++++++++++++++ public/posts/2024-11-09-01.html | 2 + public/posts/2024-12-08-01 | 109 +++++++++++++++ public/posts/2024-12-08-01.html | 2 + public/posts/2024-12-08-02 | 79 +++++++++++ public/posts/2024-12-08-02.html | 19 ++- 20 files changed, 1547 insertions(+), 6 deletions(-) create mode 100644 public/posts/2024-08-25-01 create mode 100644 public/posts/2024-08-25-02 create mode 100644 public/posts/2024-08-25-03 create mode 100644 public/posts/2024-08-26-01 create mode 100644 public/posts/2024-08-31-01 create mode 100644 public/posts/2024-09-04-01 create mode 100644 public/posts/2024-09-08-01 create mode 100644 public/posts/2024-11-09-01 create mode 100644 public/posts/2024-12-08-01 create mode 100644 public/posts/2024-12-08-02 (limited to 'public/posts') diff --git a/public/posts/2024-08-25-01 b/public/posts/2024-08-25-01 new file mode 100644 index 0000000..4d55123 --- /dev/null +++ b/public/posts/2024-08-25-01 @@ -0,0 +1,128 @@ + + + + + + + + + + a73x + + + + + + +
+
+
+

a73x

+ high effort, low reward +
+

[{home /} {posts /posts} {ethos /ethos}]

+

posts/2024-08-25-01.html

+ +
+ +← Posts +

Go Benchmarking

+ +

The benchmark cycle:

+ +
    +
  1. write a benchmark
  2. run a benchmark
  3. get a profile
  4. optimise
  5. run your tests
  6. goto 2.
+ +
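The cycle above can be sketched end to end. The function and its inputs here are made up; BenchmarkFib would normally live in a _test.go file and be driven by `go test -bench`, but testing.Benchmark lets the sketch run standalone:

```go
package main

import (
	"fmt"
	"testing"
)

// fib is a hypothetical function we want to optimise.
func fib(n int) int {
	if n < 2 {
		return n
	}
	return fib(n-1) + fib(n-2)
}

// BenchmarkFib is what step 1 produces; normally it lives in a
// _test.go file and runs via `go test -bench`.
func BenchmarkFib(b *testing.B) {
	for i := 0; i < b.N; i++ {
		fib(20)
	}
}

func main() {
	// testing.Benchmark runs the benchmark outside `go test`,
	// making this sketch self-contained.
	r := testing.Benchmark(BenchmarkFib)
	fmt.Println(r.N > 0)
}
```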

cpuprofile

+
go test -run=XXX -bench <regex> -cpuprofile <file>
+
+

memprofile

+
go test -run=XXX -bench <regex> -memprofile <file> -benchmem
+
+

pprof

+ +

pprof usage

+
go tool pprof -http=:8080 profile.pb.gz
+
+

will show a web UI for analysing the profile.

+ +

views

+ + + + +
+
+
+ ​​​​​​​​​​​​​​​​​​​
​ +

see something you disagree with? email: yourewrong@a73x.sh

+
+
+ + + + diff --git a/public/posts/2024-08-25-01.html b/public/posts/2024-08-25-01.html index 1ece8bc..f2b48cf 100644 --- a/public/posts/2024-08-25-01.html +++ b/public/posts/2024-08-25-01.html @@ -11,6 +11,7 @@ a73x + @@ -39,6 +40,7 @@ +← Posts

Go Benchmarking

+← Posts

Go Project Layouts

Do you lie awake at night and consider how to optimally lay out your Go project? No…? What about recommending Windows to a friend or colleague?? diff --git a/public/posts/2024-08-25-03 b/public/posts/2024-08-25-03 new file mode 100644 index 0000000..132d95d --- /dev/null +++ b/public/posts/2024-08-25-03 @@ -0,0 +1,89 @@ + + + + + + + + + + a73x + + + + + + +

+
+
+

a73x

+ high effort, low reward +
+

[{home /} {posts /posts} {ethos /ethos}]

+

posts/2024-08-25-03.html

+ +
+ +← Posts +

Levels of Optimisation

+ +

This probably isn’t strictly true, but it makes sense to me. +We’ve got three levels of “optimisation” (assuming your actual design doesn’t suck and just needs optimising).

+ +

Benchmark Optimisation

+ +

To begin with, we have benchmark optimisation; you create a benchmark locally, dump a profile of it, and optimise it. Then you run your tests, because the most optimal solution is “return nil”, and make sure you didn’t break anything. +This is the first and easiest optimisation because it only requires a function, nothing else, and can be done in isolation. You don’t need a working “application” here, just the function you’re trying to benchmark. There are different types of benchmarks, micro, macro, etc., but I’m leaving them out of scope for this conversation. Go read Efficient Go.

+ +

Profile guided optimisation

+ +

This is a mild step up from benchmark optimisation only because you need a live server under load from which to pull a profile, but it is probably the most hands-off step. You import the net/http/pprof package into your service, call /debug/pprof/profile?seconds=30 to get a profile, and compile your binary with go build -pgo=profile.pgo. The compiler will make optimisations for you, and even if your profile is garbage, it shouldn’t cause any regressions.

+ +
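A minimal sketch of the first step, wiring pprof into a service (the port is arbitrary, and the check in main is only there to make the sketch self-contained; a real service would just block on ListenAndServe):

```go
package main

import (
	"fmt"
	"net/http"
	_ "net/http/pprof" // side effect: registers /debug/pprof/* on http.DefaultServeMux
)

// pprofRegistered reports whether the profile endpoints are wired up
// on the default mux (they are, via the blank import above).
func pprofRegistered() bool {
	req, _ := http.NewRequest("GET", "/debug/pprof/", nil)
	_, pattern := http.DefaultServeMux.Handler(req)
	return pattern != ""
}

func main() {
	fmt.Println(pprofRegistered())
	// in a real service you would block here instead:
	// log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```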

You probably want to get a few profiles and merge them using go tool pprof -proto a.out b.out > merged. This will help provide optimisations that are more relevant to your overall system, instead of just a single 30s slice. +Also, if you have long-running calls that are chained together, a 30-second snapshot might not be enough, so try a sample with a longer window.

+ +

Runtime optimisation

+ +

This is where you expose the runtime/metrics data and monitor it continuously. There’s a list of metrics that you might be interested in, a recommended set of metrics, and generally, you are looking to optimise your interactions with the Go runtime. There are a few stats here: goroutine counts, goroutines waiting to run, heap size, how often garbage collection runs, how long garbage collection takes, etc. All useful information to use when optimising - when garbage collection is running, your program ain’t. It’s also useful for finding memory leaks; it becomes pretty obvious you are leaking goroutines when you graph the count and just watch it go up and never down. +It’s also just lowkey fun to look at the exposed data and understand what your system is doing.

+ + +
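As a sketch of the idea, the goroutine count mentioned above can be read directly with the runtime/metrics package (metric names are stable, documented strings):

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

// goroutines reads the live goroutine count via the runtime/metrics
// interface, the same data you would scrape into a monitoring system.
func goroutines() uint64 {
	s := []metrics.Sample{{Name: "/sched/goroutines:goroutines"}}
	metrics.Read(s)
	return s[0].Value.Uint64()
}

func main() {
	// at least the main goroutine is always running
	fmt.Println(goroutines() >= 1)
}
```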
+
+
+ ​​​​​​​​​​​​​​​​​​​
​ +

see something you disagree with? email: yourewrong@a73x.sh

+
+
+ + + + diff --git a/public/posts/2024-08-25-03.html b/public/posts/2024-08-25-03.html index 958e495..5112674 100644 --- a/public/posts/2024-08-25-03.html +++ b/public/posts/2024-08-25-03.html @@ -11,6 +11,7 @@ a73x + @@ -39,6 +40,7 @@ +← Posts

Levels of Optimisation

+← Posts

Writing HTTP Handlers

I’m sharing how I write handlers in Go.

diff --git a/public/posts/2024-08-31-01 b/public/posts/2024-08-31-01 new file mode 100644 index 0000000..1c25a70 --- /dev/null +++ b/public/posts/2024-08-31-01 @@ -0,0 +1,107 @@ + + + + + + + + + + a73x + + + + + + +
+
+
+

a73x

+ high effort, low reward +
+

[{home /} {posts /posts} {ethos /ethos}]

+

posts/2024-08-31-01.html

+ +
+ +← Posts +

Go's unique pkg

+

https://pkg.go.dev/unique

+ +
+

The unique package provides facilities for canonicalising (“interning”) comparable values.1

+
+ +

oh yeah, that’s obvious, I fully understand what this package does now, great read, tune in for the next post.

+ +

Interning is the re-use of an object of equal value instead of creating a new one. I’m pretending I knew this but really I’ve just reworded Interning.

+ +

So lets try again.

+ +

If you’re parsing out a CSV of bank transactions, it’s very likely a lot of names will be repeated. Instead of allocating memory for each string representing a merchant, you can simply reuse the same string.

+ +

So the dumbed down version might look like

+
var internedStrings sync.Map
+
+func Intern(s string) string {
+	if val, ok := internedStrings.Load(s); ok { 
+		return val.(string) 
+	} 
+	internedStrings.Store(s, s) 
+	return s 
+}
+
+

With a small demo here https://go.dev/play/p/piSYjCHIcLr

+ +

This implementation is fairly naive: it can only grow and it only works with strings, so naturally Go’s implementation is better.

+ +
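For comparison, a sketch of the same idea with the real package (requires Go 1.23+; the input strings are made up):

```go
package main

import (
	"fmt"
	"unique"
)

// canonical returns the canonical handle for s; equal strings yield
// identical handles, so comparing handles is a cheap pointer-style compare.
func canonical(s string) unique.Handle[string] {
	return unique.Make(s)
}

func main() {
	a := canonical("acme corp")
	b := canonical("acme corp")
	// the two handles compare equal, and Value() recovers the string
	fmt.Println(a == b, a.Value())
}
```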

It’s also worth noting that, since strings are a pointer under the hood

+ +
+

When comparing two strings, if the pointers are not equal, then we must compare their contents to determine equality. But if we know that two strings are canonicalized, then it is sufficient to just check their pointers.2

+
+ +

So to recap, Go’s unique package provides a way to reuse objects instead of creating new ones, if we consider the objects to be of equal value.

+ + + + +
+
+
+ ​​​​​​​​​​​​​​​​​​​
​ +

see something you disagree with? email: yourewrong@a73x.sh

+
+
+ + + + diff --git a/public/posts/2024-08-31-01.html b/public/posts/2024-08-31-01.html index 63ae11c..6c3924f 100644 --- a/public/posts/2024-08-31-01.html +++ b/public/posts/2024-08-31-01.html @@ -11,6 +11,7 @@ a73x + @@ -39,6 +40,7 @@ +← Posts

Go's unique pkg

https://pkg.go.dev/unique

diff --git a/public/posts/2024-09-04-01 b/public/posts/2024-09-04-01 new file mode 100644 index 0000000..4b41b49 --- /dev/null +++ b/public/posts/2024-09-04-01 @@ -0,0 +1,176 @@ + + + + + + + + + + a73x + + + + + + +
+
+
+

a73x

+ high effort, low reward +
+

[{home /} {posts /posts} {ethos /ethos}]

+

posts/2024-09-04-01.html

+ +
+ +← Posts +

Kubernetes Intro

+ +

My crash course on Kubernetes. +You’re welcome.

+ +

Pods

+ +

In Kubernetes, if you wish to deploy an application, the most basic component you would use to achieve that is a Pod. A Pod represents the smallest deployable unit in Kubernetes, encapsulating one or more containers that need to work together. While containers run the actual application code, Pods provide the environment necessary for these containers to operate, including shared networking and storage.

+ +

A Pod usually represents an ephemeral single instance of a running process or application. For example, a Pod might contain just one container running a web server. In more complex scenarios, a Pod could contain multiple containers that work closely together, such as a web server container and a logging agent container.

+ +

Additionally, we consider a Pod ephemeral because when a Pod dies, it can’t be brought back to life—Kubernetes would create a new instance instead. This behaviour reinforces the idea that Pods are disposable and should be designed to handle failures gracefully.

+ +

When you use Docker, you might build an image with docker build . -t foo:bar and run a container with docker run foo:bar. In Kubernetes, to run that same container, you place it inside a Pod, since Kubernetes manages containers through Pods.

+
apiVersion: v1
+kind: Pod
+metadata:
+  name: <my pod name here> 
+spec:
+  containers:
+  - name: <my container name here> 
+    image: foo:bar
+
+

In this YAML manifest, we define the creation of a Pod using the v1 API version. The metadata field is used to provide a name for identifying the Pod within the Kubernetes cluster. Inside the spec, the containers section lists all the containers that will run within that Pod.

+ +

Each container has its own name and image, similar to the --name and image parameters used in the docker run command. However, in Kubernetes, these containers are encapsulated within a Pod, ensuring that they are always co-scheduled, co-located, and share the same execution context.

+ +

As a result, containers within a Pod should be tightly coupled, meaning they should work closely together, typically as parts of a single application or service. This design ensures that the containers can efficiently share resources like networking and storage while providing a consistent runtime environment.

+ +

Why Multiple Containers in a Pod?

+ +

Sometimes, you might need multiple containers within a single Pod. Containers in a Pod share the same network namespace, meaning they can communicate with each other via localhost. They also share storage volumes, which can be mounted at different paths within each container. This setup is particularly useful for patterns like sidecars, where an additional container enhances or augments the functionality of the main application without modifying it.

+ +

For example, imagine your application writes logs to /data/logs. You could add a second container in the Pod running fluent-bit, a tool that reads files and forwards them to a user-defined destination, in this case an external log management service, without changing the original application code. This separation also ensures that if the logging container fails, it won’t affect the main application’s operation.

+ +
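A sketch of that two-container Pod might look like the following; every name here is illustrative, and fluent-bit would need its own configuration in practice:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logging        # all names here are illustrative
spec:
  volumes:
  - name: logs
    emptyDir: {}                # ephemeral volume shared by both containers
  containers:
  - name: web
    image: foo:bar              # the application writes logs to /data/logs
    volumeMounts:
    - name: logs
      mountPath: /data/logs
  - name: log-forwarder
    image: fluent/fluent-bit    # reads the shared logs and ships them elsewhere
    volumeMounts:
    - name: logs
      mountPath: /data/logs
      readOnly: true
```

Because both containers mount the same emptyDir volume, the forwarder sees the web server’s log files without either container knowing about the other.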

When deciding what containers go in a pod, consider how they’re coupled. Questions like “how should these scale?” might be helpful. If you have two containers, one for a web server and one for a database, as your web server traffic goes up, it doesn’t really make sense to start creating more instances of the database. So you would put your web server in one pod and your database in another, allowing Kubernetes to scale them independently. On the other hand, a container which shares a volume with the web server would need to scale on a 1:1 basis, so they go in the same pod.

+ +

Pod Placement

+ +

When a Pod is created, Kubernetes assigns it to a Node—a physical or virtual machine in the cluster—using a component called the scheduler. The scheduler considers factors like resource availability, node labels, and any custom scheduling rules you’ve defined (such as affinity or anti-affinity) when making this decision. Affinity means the pods go together, anti-affinity means keep them on separate nodes. Other rules can be used to direct Pods to specific Nodes, such as ensuring that GPU-based Pods run only on GPU-enabled Nodes.

+ +

Scaling

+ +

In practice, you won’t be managing Pods manually. If a Pod crashes, manual intervention would be required to start a new Pod and clean up the old one. Fortunately, Kubernetes provides controllers to manage Pods for you: Deployments, StatefulSets, DaemonSets, and Jobs.

+ + + +

Deployments and StatefulSets also support scaling mechanisms, allowing you to increase or decrease the number of Pods to handle varying levels of traffic.

+ +

Services

+ +

As your application scales and you handle multiple Pods, you need a way to keep track of them for access. Since Pods can change names and IP addresses when they are recreated, you need a stable way to route traffic to them. This is where Kubernetes services come into play.

+ +

Services provide an abstraction layer that allows you to access a set of Pods without needing to track their individual IP addresses. You define a selector in the Service configuration and traffic reaching the Service is routed to one of the Pods matching the selector.

+ +

There are four types of services in Kubernetes: ClusterIP, NodePort, LoadBalancer, and ExternalName.

+ + + +
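Concretely, a minimal ClusterIP Service for a hypothetical set of web Pods might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web               # illustrative
spec:
  type: ClusterIP         # the default; NodePort, LoadBalancer and ExternalName are the others
  selector:
    app: web              # traffic is routed to Pods carrying this label
  ports:
  - port: 80              # the Service's stable port
    targetPort: 8080      # the port the container actually listens on
```

Pods labelled app: web become endpoints of the Service automatically as they come and go, which is exactly the stable routing described above.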

Volumes

+ +

Broadly speaking, Kubernetes offers two types of storage: ephemeral and persistent volumes.

+ + + +

Understanding storage in Kubernetes can be a bit complex due to its abstraction and reliance on third-party controllers. Kubernetes uses the Container Storage Interface (CSI), a standardised specification that allows it to request storage from different providers, which then manage the lifecycle of the underlying storage. This storage could be anything from a local directory on a node to an AWS Elastic Block Store (EBS) volume. Kubernetes abstracts the details and relies on the CSI-compliant controller to handle the specifics.

+ +

Key Components of Kubernetes Storage

+ +

There are three main components to understand when dealing with storage in Kubernetes: Storage Classes, PersistentVolumes (PVs), and PersistentVolumeClaims (PVCs).

+ +
    +
  1. Storage Class: A Storage Class defines the type of storage and the parameters required to provision it. Each Storage Class corresponds to a specific storage provider or controller. For example, a Storage Class might define a template for AWS EBS volumes or Google Cloud Persistent Disks.
  2. PersistentVolume (PV): A PersistentVolume represents a piece of storage in the cluster that has been provisioned according to the specifications in a Storage Class. PVs can be either statically created by a cluster administrator or dynamically provisioned by a controller. For instance, when a Storage Class is associated with AWS, the controller might create an EBS volume when a PV is needed.
  3. PersistentVolumeClaim (PVC): A PersistentVolumeClaim is a user’s request for storage. It specifies the desired size, access modes, and Storage Class. When a PVC is created, Kubernetes will find a matching PV or trigger the dynamic provisioning of a new one through the associated Storage Class and controller. Once a PV is provisioned, it becomes bound to the PVC, ensuring that the requested storage is dedicated to that specific claim.
+ +
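Putting the claim side into practice, a PersistentVolumeClaim might look like this (the storage class name is illustrative and must match one defined in your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce             # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard  # illustrative; must match a StorageClass in the cluster
```

A Pod then references the claim by name in its volumes section, and the bound storage follows the claim rather than any particular Pod.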

How It Works Together

+ +

The typical workflow involves a user creating a PersistentVolumeClaim to request storage. The CSI controller picks up this request and, based on the associated Storage Class, dynamically provisions a PersistentVolume that meets the user’s specifications. This PersistentVolume is then bound to the PersistentVolumeClaim, making the storage available to the Pod that needs it.

+ + +
+
+
+ ​​​​​​​​​​​​​​​​​​​
​ +

see something you disagree with? email: yourewrong@a73x.sh

+
+
+ + + + diff --git a/public/posts/2024-09-04-01.html b/public/posts/2024-09-04-01.html index e45a1e8..629e772 100644 --- a/public/posts/2024-09-04-01.html +++ b/public/posts/2024-09-04-01.html @@ -11,6 +11,7 @@ a73x + @@ -39,6 +40,7 @@ +← Posts

Kubernetes Intro

+← Posts

Building a Static Site with Hugo and Docker

+← Posts

Refactoring

+← Posts

Simplifying Interfaces with Function Types

In Go, you can define methods on named types, which means that we can define a named function type, and then define methods on that type.

diff --git a/public/posts/2024-12-08-02 b/public/posts/2024-12-08-02 new file mode 100644 index 0000000..34eea36 --- /dev/null +++ b/public/posts/2024-12-08-02 @@ -0,0 +1,79 @@ + + + + + + + + + + a73x + + + + + + +
+
+
+

a73x

+ high effort, low reward +
+

[{home /} {posts /posts} {ethos /ethos}]

+

posts/2024-12-08-02.html

+ +
+ +← Posts +

You and Your Career

+
+

The unexamined life is not worth living

+
- Socrates (probably)
+
+ +

If you’ve not listened to You and Your Research by Richard Hamming, I highly recommend it—you can also read the transcript here. Hamming’s talk emphasises the importance of examining your career and the choices you make. Whilst the talk specifically speaks about research, it’s fairly easy to draw parallels to software engineering—you only have one life, so why not do significant work?

+ +

Hamming doesn’t define significant work, nor will I—it’s deeply personal and varies for each individual. Instead, he outlines traits he’s seen in others who have accomplished significant work.

+ +
    +
  1. Doing significant work often involves an element of luck, but as Hamming puts it, ‘luck favours the prepared mind’. Being in the right place at the right time will only matter if you have the knowledge or skills to execute. Creating opportunities is important but wasted if you aren’t ready to perform. Conversely, having the ability to perform but no opportunities can feel just as futile. The key lies in striking a balance—cultivating both readiness and the conditions where luck can find you.
  2. Knowledge and productivity are like compound interest—consistent effort applied over time leads to exponential growth. In a field as vast as software, even a surface-level awareness of an idea can prove valuable when the time comes to use it, while deeper understanding builds expertise. Mastering the fundamentals is crucial; they are learned once and applied repeatedly in various contexts.
  3. Maintain enough self-belief to persist, but enough self-awareness to recognise and adapt to mistakes. Software is more akin to an art than a science—there are no perfect solutions, only trade-offs. Success often lies in minimising the disadvantages rather than chasing absolutes.
  4. To do important work, focus on important problems. What do you care about deeply? How can you work to address it? The answer is personal, but I think there’s great importance in reflecting on it, even if the answer changes.
+ +

These aren’t all the points Hamming raises, but they are some of the points that stuck with me; I highly recommend listening to the talk yourself so that you may draw your own conclusions.

+ +

The average career is only 80,000 hours, so spend it well.

+ + +
+
+
+ ​​​​​​​​​​​​​​​​​​​
​ +

see something you disagree with? email: yourewrong@a73x.sh

+
+
+ + + + diff --git a/public/posts/2024-12-08-02.html b/public/posts/2024-12-08-02.html index 0c5ca17..7bbb4a0 100644 --- a/public/posts/2024-12-08-02.html +++ b/public/posts/2024-12-08-02.html @@ -11,6 +11,7 @@ a73x + @@ -39,22 +40,28 @@ +← Posts

You and Your Career

The unexamined life is not worth living

-
- Socrates (probably)
+
- Socrates
 
-

If you’ve not listened to You and Your Research by Richard Hamming, I highly recommend it—you can also read the transcript here. Hamming’s talk emphasises the importance of examining your career and the choices you make. Whilst the talk specifically speaks about research, it’s fairly easy to draw parallels to software engineering—you only have one life, so why not do significant work?

+

Dr Richard Hamming was a prolific individual. You may recognise the name from “Hamming codes”, which he invented in 1950 during his time at Bell Labs—an invention which, amongst other achievements, resulted in him being awarded the Turing Award. He was also awarded the Emanuel R. Piore Award in 1979 by the IEEE, and in 1986 the IEEE also created The Richard W. Hamming Medal in his honour.

-

Hamming doesn’t define significant work, nor will I—it’s deeply personal and varies for each individual. Instead, he outlines traits he’s seen in others that have accomplished significant work.

+

On March 7 1986, Dr Richard Hamming delivered a talk centred around the question “Why do so few scientists make significant contributions and so many are forgotten in the long run?” He’d spent his time observing great scientists at Los Alamos during the war and the 30 years he spent at Bell Labs and learnt a great deal from them.

+ +

The talk was titled You and Your Research, and if you’ve not listened to it you can do so here or you read the transcript here.

+ +

Hamming’s talk emphasises the importance of examining your career and the choices you make. Whilst the talk specifically speaks about research, it is fairly easy to draw parallels to software engineering—you only have one life, so why not do significant work?

+ +

Hamming doesn’t define significant work, he leaves that to the listener to determine—it’s deeply personal and varies for each individual. Instead, he outlines traits he’s seen in others that have accomplished significant work.

  1. Doing significant work often involves an element of luck, but as Hamming puts it, ‘luck favours the prepared mind’. Being in the right place at the right time will only matter if you have the knowledge or skills to execute. Creating opportunities is important but wasted if you aren’t ready to perform. Conversely, having the ability to perform but no opportunities can feel just as futile. The key lies in striking a balance—cultivating both readiness and the conditions where luck can find you.
  2. Knowledge and productivity are like compound interest—consistent effort applied over time leads to exponential growth. In a field as vast as software, even a surface-level awareness of an idea can prove valuable when the time comes to use it, while deeper understanding builds expertise. Mastering the fundamentals is crucial; they are learned once and applied repeatedly in various contexts.
  3. -
  4. Maintain enough self-belief to persist, but enough self-awareness to recognise and adapt to mistakes. Software is more akin to an art than a science—there are no perfect solutions, only trade-offs. Success often lies in minimising the disadvantages rather than chasing absolutes.
  5. -
  6. To do important work, focus on important problems. What do you care about deeply? How can you work to address it? The answer is personal, but I think there’s great importance is reflecting on it, even if the answer changes.
    -
  7. +
  8. Maintain enough self-belief to persist, but enough self-awareness to recognise and adapt to mistakes. I’m honestly very fond of this idea; software is more akin to an art than a science—there are no perfect solutions, only trade-offs. Too often, I’ve seen people double down, unquestioningly defending their ideas instead of listening. Success usually lies in minimising the disadvantages rather than chasing absolutes.
  9. +
  10. To do important work, focus on important problems. What counts as important is defined by you: what do you care about deeply? How can you work to address it? The importance of your career can only really be answered by you, and I think reflecting on what you’re doing helps keep you on the right path.

These aren’t all the points Hamming raises, but they are some of the points that stuck with me; I highly recommend listening to the talk yourself so that you may draw your own conclusions.

-- cgit v1.2.3