My inputs in February

Published 6 Mar 2024 - 3 min read

This month we have observability, tooling, AWS Lambda architecture, and some AI.

To read

DotSlash: Simplified executable deployment: We are getting used to great tooling open-sourced by Meta. This month they open-sourced DotSlash, a tool that makes executables available in source control with a negligible impact on repository size. Instead of committing the binary to the repo, you commit a small JSON-like file with platform-specific descriptors for the executable. DotSlash processes this file, fetches the right binary, and caches it; on subsequent invocations, DotSlash runs the binary straight from the cache.
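To make the fetch-and-cache flow concrete, here is a minimal sketch of how such a descriptor-driven launcher could work. The descriptor fields (`name`, `platforms`, `url`, `sha256`) are illustrative assumptions, not the actual DotSlash file format:

```python
import hashlib
import json
import os
import urllib.request

# Hypothetical descriptor: maps each platform to where the binary lives
# and how to verify it. Field names are made up for illustration.
DESCRIPTOR = json.loads("""
{
  "name": "mytool",
  "platforms": {
    "linux-x86_64": {
      "url": "https://example.com/mytool-linux-x86_64",
      "sha256": "..."
    }
  }
}
""")

def resolve(descriptor: dict, platform: str, cache_dir: str = ".cache") -> str:
    """Return a local path to the binary, downloading it on first use."""
    entry = descriptor["platforms"][platform]
    cached = os.path.join(cache_dir, descriptor["name"] + "-" + platform)
    if not os.path.exists(cached):  # cache miss: fetch, verify, store
        os.makedirs(cache_dir, exist_ok=True)
        data = urllib.request.urlopen(entry["url"]).read()
        if entry["sha256"] != "..." and hashlib.sha256(data).hexdigest() != entry["sha256"]:
            raise ValueError("checksum mismatch")
        with open(cached, "wb") as f:
            f.write(data)
        os.chmod(cached, 0o755)
    return cached  # cache hit: reuse the already-fetched binary
```

The repo stays tiny because only the descriptor is versioned; the multi-gigabyte binaries live in external storage and the local cache.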

I Ran A Background Check On A Startup: Employers frequently conduct background checks on prospective hires. This piece flips that around, offering a glimpse into what a background check on an employer might look like: digging into a company’s funding history, Glassdoor reviews, and legal records.

Overview of Cloudflare’s logging pipeline: A great example of how logging pipelines can work at scale, courtesy of Cloudflare, with a strong focus on availability.

All you need is Wide Events, not “Metrics, Logs and Traces”: A great take on observability: what it shouldn’t be, how it works at Meta, and what gets close to that ideal (spoiler: Honeycomb). Definitely worth reading.
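The core idea can be sketched in a few lines: instead of emitting separate metrics, logs, and traces, each request emits one richly-attributed record. The field names below are illustrative, not from the article:

```python
import json
import time
import uuid

def handle_request(path: str, user_id: str) -> dict:
    """Serve a request and emit a single wide event describing it."""
    start = time.monotonic()
    status, cache_hit = 200, False  # placeholder application logic
    event = {
        # one record carries what would otherwise be split across
        # metrics (duration), logs (status), and traces (request_id)
        "request_id": str(uuid.uuid4()),
        "path": path,
        "user_id": user_id,
        "status": status,
        "cache_hit": cache_hit,
        "duration_ms": (time.monotonic() - start) * 1000,
    }
    print(json.dumps(event))  # in practice, ship this to your event store
    return event

handle_request("/api/items", "u123")
```

Because every field lives on the same record, you can slice latency by any attribute after the fact, rather than deciding up front which metric dimensions to pre-aggregate.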

Resolving Code Review Comments with Machine Learning: The first paper in the reading section comes from Google, describing a system that automatically resolves code-review comments in the day-to-day development workflow. Code-change authors at Google now address 7.5% of all reviewer comments by applying an ML-suggested edit, which is a significant productivity boost at Google’s scale. I liked the emphasis on user experience here. This is just the beginning for tools like this, and a huge opportunity for GitHub to bring Copilot into the PR workflow.

The second paper of the day is On-demand Container Loading in AWS Lambda. The AWS team describes how they scaled Lambda to start up to 15,000 containers per second, each as large as 10 GiB. Naively moving and unpacking a 10 GiB image for each of those 15,000 containers would require roughly 150 TiB/s of network bandwidth, so their solution combines caching, deduplication, erasure coding, and sparse loading. Without adding any customer-visible complexity (customers simply upload a container image to a repository), Lambda achieves this scale and its cold-start latency goals. There are a lot of interesting ideas and in-depth discussion of design tradeoffs.
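The back-of-the-envelope bandwidth figure falls out directly from the two numbers in the abstract:

```python
GIB = 2**30  # bytes in a gibibyte

containers_per_second = 15_000
image_size_bytes = 10 * GIB  # worst case: every image is 10 GiB

# Naive approach: ship every image in full, every second.
naive_bandwidth = containers_per_second * image_size_bytes

print(naive_bandwidth / 2**40)  # about 146.5 TiB/s
```

That is far beyond what any fleet can realistically move, which is why deduplication and caching (most images share the vast majority of their blocks) do the heavy lifting.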

To watch

Jeff Dean (Google): Exciting Trends in Machine Learning. An amazing lecture from Jeff Dean on the state of the art in AI. He starts from the very beginning, describing the research on transformers and how hardware changed to enable the latest breakthroughs in AI. He also covers the latest trends in generative AI, multimodal models, and models for targeted problems. A must-see for anyone interested in AI.

That’s all folks, see you next month.

I write about Continuous Integration, Continuous Deployment, testing, and other cool stuff.
Gaspare Vitta on Twitter