My personal tech stack
Problem
A recent obsession of mine has been the concept of Everything-as-Code, especially as applied to architecture and documentation. One thing I had been unable to fit into this push was our Tech Radar.
In general, our documentation sits in the Gitlab repositories right alongside the code, while our Radar sat on a Miro board – putting it out of the way for many developers. This presented two problems:
- The Radar was not being referred to by developers, and therefore not being used.
- Technologies adopted were not being documented, and so the Radar was horribly out of date.
As part of a drive toward Everything-as-Code, I was searching for a way to move the Radar into our documentation. There were also other limitations of the ThoughtWorks Tech Radar that I considered addressing.
Objectives
Entering this challenge, I knew that the final solution needed to:
- Be effective at aligning all teams on the current technology stack
- Help constrain impulsive technology choices by ensuring a conscientious pathway to adoption
- Be as close as possible to meeting the ideal of “Everything-as-Code”
I also wanted to:
- Use the tool for onboarding to ensure new joiners know which technologies they need to master to be productive
- Provide a snapshot of what I am good at, and what I am not (personal use)
- Track technologies I am spending time on and which I have decided to abandon (personal use)
The ThoughtWorks Tech Radar and Key Observations
The radar in question was based on the ThoughtWorks Tech Radar, a tool used to map technology landscapes. Each technology (a “blip” in the framework) is placed along two key dimensions: quadrants, which categorize technologies into types, and rings, which categorize them by adoption status.
There are four technology types that ThoughtWorks uses for the radar. Categorizing technologies into quadrants is not critical for ThoughtWorks; it is just a way to break the radar up into topic areas.
The rings are more important, and represent ThoughtWorks’ view on the readiness for adoption of certain technologies.
Key observations
1: Embedding the Radar into Markdown is not easily supported by existing tooling.
There is build-your-own-radar tooling in an official repository maintained by ThoughtWorks. The format, however, is more suited to hosting on a website than to embedding in Markdown.
2: The Tech Radar is not intended to map the state of the organization’s tech landscape.
Two excerpts from the official ThoughtWorks Tech Radar FAQ will help us to understand this:
The Radar captures the experiences and learnings from Thoughtworkers based on the work they do on behalf of our clients. As a result, it cuts across technologies, industries and geographies. As a large client services company, with a long history in custom software development, we believe it represents a reasonable sample but no attempt is made to be comprehensive or to survey the market at large. [emphasis mine]
If a blip doesn’t move, it fades from the Radar.
The above tells us that the Radar is not meant as a way to map the competencies of an organization, but rather presents a sample of technologies that an organization finds interesting. So there is a mismatch between what it does and what we need from an internal technology landscape tool.
3: There isn’t a clear overview of all technologies at first glance.
The Radar doesn’t provide a clear overview of all technologies at first glance. Instead it provides numbered blips, whose technologies become clear only when hovering over them.
4: It is not clear which technologies are more important than others.
Which technology to start with is suggested by the numbers on the blips, which flow in the following order: Techniques, Platforms, Tools, Languages & Frameworks. The numbering also starts from Adopt and moves outwards to Hold. What significance this has is not clear to me, but my suspicion is that the order doesn’t matter – the numbers are likely meant simply to act as indexes.
This is because the circular format of the Radar breaks directionality. None of the four quadrants is clearly more important than the others, and the only direction we have to work with based on shape alone is from the center to the outer edge. Even this directionality is broken by the cross that splits the quadrants.
The Zalando Tech Radar
Zalando too has a Tech Radar that borrows from ThoughtWorks, and it attempts to fix observations #2 and #3.
It adapts the titles of the quadrants to its own internal categorizations, making the radar more like a map of the organization’s technologies (addressing #2).
It also adds the name of each blip directly to the radar itself, letting users quickly scan and identify what each blip represents (addressing #3). Nonetheless, the positions of the blips are still far from their labels – and each blip is also doubly labelled by its ring category.
Zalando was unable (or did not see the need) to tackle #1, while #4 remains a problem.
Summary of Findings
From the above, our key observations are as follows:
- Embedding the Radar into Markdown is not easily supported by existing tooling.
- The Tech Radar is not intended to map the state of the organization’s tech landscape.
- There isn’t a clear overview of all technologies at first glance.
- It is not clear which technologies are more important than others.
UX Solutioning – some iterations
Note that the examples below are a snapshot of my own personal tech stack, and are not representative of the organization I work for.
Despite my misgivings about its format, my first iterations of an Everything-as-Code tech stack framework assumed the use of something like d3.js to generate a static image of the radar.
As part of design iteration in Figma, I simply borrowed the format of the ThoughtWorks Tech Radar, replacing the numbered blips with labelled dots. One innovation was to expand the Adopt ring, providing more surface area for adopted technologies – which would necessarily be more numerous in a stable technology landscape.
What a confusing mess! That’s probably why both frameworks left out the labels in the radars.
I also considered the difficulty of generating the above programmatically. Notwithstanding the clutter, positioning alone would be absolutely non-trivial.
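To get a feel for the effort, here is a minimal sketch of naive blip placement. The ring radii, quadrant angles, and function name are all my own assumptions, not taken from any radar tooling. Converting a ring and quadrant into x/y coordinates is the easy part; what makes real radar generation non-trivial is keeping dozens of labelled blips from overlapping one another and the ring boundaries:

```python
import math
import random

# Assumed ring radii (pixels from the center) and quadrant start angles.
RINGS = {"Adopt": (0, 100), "Trial": (100, 160), "Assess": (160, 210), "Hold": (210, 250)}
QUADRANT_ANGLES = {  # start angle of each 90-degree quadrant, in radians
    "Languages & Frameworks": 0.0,
    "Tools": math.pi / 2,
    "Platforms": math.pi,
    "Techniques": 3 * math.pi / 2,
}

def place_blip(quadrant: str, ring: str, rng: random.Random) -> tuple[float, float]:
    """Pick a random angle within the quadrant and a random radius within
    the ring, then convert the polar position to x/y coordinates.
    Collision avoidance between blips is deliberately left out."""
    inner, outer = RINGS[ring]
    angle = QUADRANT_ANGLES[quadrant] + rng.uniform(0.1, math.pi / 2 - 0.1)
    radius = rng.uniform(inner + 10, outer - 10)
    return radius * math.cos(angle), radius * math.sin(angle)

x, y = place_blip("Tools", "Adopt", random.Random(42))
```

Each call lands somewhere valid, but nothing stops two labelled blips from landing on top of each other – which is exactly where the real work starts.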
Then it occurred to me: do I even need the Radar format? A grid or matrix approach would be much more straightforward to implement.
Given the number of items in the Adopt and Hold rings, it made sense for those columns to be wider so they could hold more items.
But how to squeeze everything in? Did we have to impose some kind of character limit? Should we use an ellipsis for long labels, with a hover interaction that shows the full label? Was that last option even possible in SVG images?
Then the thought hit me: Do we even need to generate this out as an image?
If the objective is Everything-as-Code, and we are already using Markdown, could we just keep things simple and use a Markdown table?
Final Form – Tech Adoption Lanes
| | Assess | Trial | Adopt | Hold |
| - | ------ | ----- | ----- | ---- |
| Languages & Frameworks | | | | |
| Tools | | | | |
| Platforms | | | | |
| Techniques | | | | |
In this format, we borrow the shared language of the ThoughtWorks Tech Radar, especially its dimensions – except that we present them as a grid.
The grid format also has the benefit of being simpler to construct, allowing us to emphasize ordering and prioritization.
Adoption Lifecycle
Technologies move rightwards along their lane as they mature within the organization. New technologies are placed in Assess, where they undergo evaluation. As in the ThoughtWorks Tech Radar, some technologies that sit here are simply technologies of interest.
Technologies move rightwards into Trial when they start being POC-ed and used in a limited way in production. Those that have proven themselves in production, or that the team has had a lot of experience using in production in the past, sit under Adopt.
Finally, we have the Hold column. Here we place technologies that we want to emphasize will be rejected, that we explicitly establish we will not adopt, or that were adopted in the past but have been or are now being sunset.
This orientation makes Adopt less prominent than Assess and Trial, but I like the idea of seeing new technologies flow from left to right, driving learning for both the individual and the organization.
“Concreteness” of Technology Types
Technology types have also been arranged in terms of abstraction. Languages and Frameworks are the most concrete pieces that coders usually work with, and sit at the top.
Something I have noticed is that potential candidates or new joiners to an organization also care about these the most when referring to a tech stack.
We then move down the lanes. Tools are usually used to support coding, and all of these sit within platforms. Finally, techniques are the most abstract because they deal with development approaches and processes, not with coding directly.
Prioritizing Technology Items
Another way order is useful is in forcing us to think about which items are most important to the organization. Lists have a natural top-down flow, and we need to place the items that are most important at the top.
In an organizational tech stack, importance can be decided by which technology is most widely adopted, or which technology a new joiner should pick up first to be productive in the general stack.
In a personal tech stack, importance can be decided by which technology one is most proficient in, or prioritizes learning.
Restricting Technology Items in each Lane
A conscious decision has also been made to restrict the number of items in each lane.
No one is an expert at everything. An items-per-cell restriction forces us to think about which technologies are most important to us. The restriction is easy to relax if we want to allow more items; five is just my personal limit at the moment.
Additional items can then be added below the table, detailed in paragraph form or as bullet points.
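Since the stack lives in a Git repository, the per-cell limit could even be enforced in CI. A minimal sketch, assuming the table is read from a Markdown file and that five items is the limit (the `check_lanes` helper is my own invention, not part of any framework):

```python
import re

MAX_ITEMS = 5  # assumed per-cell limit; adjust to taste

def check_lanes(markdown: str, max_items: int = MAX_ITEMS) -> list[str]:
    """Return a violation message for every table cell exceeding the limit."""
    violations = []
    for line in markdown.splitlines():
        # Skip non-table lines and the divider row (made only of |, - and spaces).
        if not line.startswith("|") or set(line) <= set("|- "):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        row_name = cells[0] or "(header)"
        for col, cell in enumerate(cells[1:], start=1):
            count = len(re.findall(r"<li>", cell))
            if count > max_items:
                violations.append(f"{row_name}, column {col}: {count} items")
    return violations

# Usage: a row whose second lane holds six items trips the check.
table = "|Tools |<li>A</li>|<li>B</li><li>C</li><li>D</li><li>E</li><li>F</li><li>G</li>|||"
print(check_lanes(table))
```

A failing pipeline then becomes the nudge to either prune the lane or move the overflow into the prose below the table.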
One tiny problem
I’ll admit, the Markdown syntax that generates our adoption lanes is a horrific mess:

```markdown
| | Assess | Trial | Adopt | Hold |
| - | ------ | ----- | ----- | ---- |
|Languages<br>& Frameworks |<li>Rust</li>|<li>NextJS</li><li>React Native</li><li>NestJS</li>|<li>JavaScript</li><li>TypeScript</li><li>Python</li><li>ExpressJS</li><li>Vite + ReactJS</li>|<li>Ruby + Ruby on Rails</li><li>Dart + Flutter</li><li>AngularJS</li><li>Create React App</li><li>C/C++</li>|
|Tools |<li>GraphQL</li>|<li>Cucumber</li><li>Playwright</li><li>Metro</li><li>Radix</li>|<li>Nx</li><li>K8s & Docker</li><li>Jest</li><li>PostgreSQL</li><li>Markdown</li>|<li>TensorFlow</li><li>Selenium</li>|
|Platforms |<li>Raspberry PI</li>|<li>Azure Cloud</li><li>Novu</li>|<li>Vultr Cloud</li><li>Gitlab & Gitlab CI/CD</li><li>Keycloak</li><li>JIRA</li><li>Camunda Platform</li>|<li>AWS</li><li>Trello</li><li>Metaflow</li><li>BeagleBone black</li><li>Arduino</li>|
|Techniques|<li>Micro FEs</li><li>Event-driven Ar.</li>||<li>User Story Mapping</li><li>Architecture-as-code</li><li>TDD/BDD</li><li>Scrum & Kanban</li><li>Developer Experience</li>|<li>Waterfall</li><li>Native Mobile Apps</li><li>Performance</li>|
```
But given that we revisit this only once in a while, maybe that’s alright?
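And if it ever stops being alright, the table could itself be generated from structured data, keeping the source readable. A minimal sketch, assuming a dict-based layout of my own invention (the `lanes_to_markdown` helper is hypothetical, not part of any framework):

```python
# Render an adoption-lanes Markdown table from {row: {column: [items]}}.
COLUMNS = ["Assess", "Trial", "Adopt", "Hold"]

def lanes_to_markdown(lanes: dict) -> str:
    """Build the table header, divider, and one <li>-filled row per lane."""
    header = "| | " + " | ".join(COLUMNS) + " |"
    divider = "| - | " + " | ".join("-" * len(c) for c in COLUMNS) + " |"
    rows = []
    for row_name, cells in lanes.items():
        rendered = [
            "".join(f"<li>{item}</li>" for item in cells.get(col, []))
            for col in COLUMNS
        ]
        rows.append(f"|{row_name} |" + "|".join(rendered) + "|")
    return "\n".join([header, divider, *rows])

# Usage: lanes are plain Python data, so editing the stack means editing a dict.
stack = {
    "Languages<br>& Frameworks": {
        "Assess": ["Rust"],
        "Adopt": ["TypeScript", "Python"],
    },
}
print(lanes_to_markdown(stack))
```

The same data file could feed other views too – an onboarding page, or that validation step in CI.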
If this post was useful to you, consider dropping a star on my github repo where I provide a shorter guide on using this framework!