Advent of Code

I love December. Christmas, New Year, a week or two of holidays, a look back at the year’s achievements and… Advent of Code. Advent of Code is an annual programming challenge consisting of puzzles that can be solved in any programming language. The event is aimed at people with all levels of programming skill and offers an opportunity to improve our skills and learn new techniques. Puzzles are released daily during December until the 25th, Christmas Day. There are always two problems, and you get a star for each one solved.

I’ll be honest and say that until 2022 I solved some challenges but was never consistent. This year I decided to take it seriously, and by the 8th I had collected all 16 stars. As I said above, I’m seeing it as a learning opportunity. Every problem comes with an example and its solution, so here I am doing TDD, the classic red-green-refactor cycle. Every day you download a text file as input and solve for each of the stars, so here I am with an automated Node template that generates a common structure of files and folders. Problems can be solved in several ways, so here I am measuring the performance of my solution and optimising. I’m publishing my answers day by day, so here I am commenting my code so that my intentions are clear. And as there is one puzzle per day, I’m treating each day as a release, automating the generation of releases on GitHub using release-please.

My strategy every day has been the most naive one: I try to solve everything by brute force. When it takes too long, I stop and think of something better. If the runtime is acceptable (quite subjective, I admit; anything under 3 minutes is fine by me), I submit the answers, and at the end of the day I revisit the problem and reflect on whether there’s a better way to do it. For example, one day it was necessary to calculate paths using a map with instructions. My first approach was a loop; after solving it, I realised I could use the least common multiple of the different paths. Nice!

Let’s see if I can keep up with the daily puzzles; it’s been a lot of fun, and I highly recommend it!

OpenAI Dev Day

I need to talk about the OpenAI keynote at the developer event this week. There was a perception that ChatGPT was in decline, that the responses were not as good as they used to be. In a tweet, Mike Young lists several Reddit threads with the same complaint; a recent paper quantifies the drop in the model’s accuracy; and, in an interview, neuroscientist Álvaro Dias also says that ChatGPT is getting worse. Add to this the ongoing lawsuits over misuse of data, with The New York Times reportedly considering legal action against OpenAI as copyright tensions swirl. It didn’t seem like a good time, but the news from the event puts ChatGPT back in the headlines.

Now there is the possibility of creating customized versions of ChatGPT for specific needs, without needing to know how to code. I’ve said before that my benchmark here is my 16-year-old son, who uses the chat every day. He sat down with me on Friday and we built a personalized chat that receives a text written by him and corrects it using the standards established by his school. Comparing the “normal” model with our “international school” model, the recommendations from the latter are much more accurate. And he can share it with his colleagues; it’s custom prompts on steroids. OpenAI has understood that ordinary users want their lives made easier.

Another example of this is the ability to send attachments (currently limited to 10 files). Before, you needed a plugin that sent the document to a service, and from there ChatGPT analysed it. We tested this with a meteorological data file, 80MB and bzipped. It was able to open the file and do the analysis. Which brings me to another new feature, the “Data Analysis” agent. This could already be done before, but this personalised chat makes it even easier to discover trends and anomalies. For that file of mine, I must admit that the charts it put together were better than mine ¯\_(ツ)_/¯

One more? You don’t have to switch modes to create images or browse the web. The algorithm can determine which mode you need for the interaction.

And for the corporate world, two messages. Sam stressed that ChatGPT Enterprise does not use submitted information for model training, i.e., there is no risk of your spreadsheet with last quarter’s sales data being shown to a competitor. And OpenAI has an initiative called “Copyright Shield”, in which it will bear the legal costs of actions for copyright infringement. The company defends its own use of data as “fair use” under US copyright law, a doctrine that allows a more liberal interpretation of copyright in line with American ideals of free expression.


Arguments and results

After returning from API Days in London, I’m studying design patterns for APIs. The last paper I read was “Arguments and Results” by James Noble (1), published in 1997 and still extremely relevant. Part of my current work, developing a data warehouse, is optimising the transport of huge volumes of data. If I want to use APIs, how can I do this efficiently? There is a consensus that REST APIs are not suitable for high-volume scenarios, and studying this topic I arrived at Noble.

The article discusses patterns for object protocols, organised into two groups:

  • How to send objects (Arguments): Arguments object, Selector object and Curried object
  • How to receive objects (Results): Result object, Future object and Lazy object


  • Arguments object: This pattern streamlines a method’s signature by consolidating related arguments into a single object and modifying the signature to accept that object instead. It brings several advantages: you make your domain explicit; your clients can use what they have at hand (objects, instead of separate values); and you are able to separate the processing logic from your business logic.


    void Send-FixedIncome(EntityName, IssueDate, MaturityDate, PrincipalAmount, ...) { }


    void Send-FixedIncome(FixedIncome) { }

“adding an eleventh argument to a message with ten arguments is qualitatively quite different to adding a second argument to a unary message”

  • Selector object: When you have methods with quite similar arguments, this pattern introduces a selector that enables you to pick one of them. Noble mentions you could use an enum; in GoF terms, a Flyweight would do the trick.


    void CalculateHomeLoanAmount(HomeLoan, Collateral) { }
    void CalculateHomeLoanAmount(HomeLoan) { }
    void CalculateAutoLoanAmount(AutoLoan) { }
    void CalculatePersonalLoanAmount(PersonalLoan) { }


    void CalculateAmount(LoanType, Loan) { } //Loan would be the interface for all loan types

“Protocols (APIs) where many messages perform similar functions are often difficult to learn and to use, especially as the similarity is often not obvious from the protocol’s documentation”

  • Curried object: From the functional programming world, currying breaks a function with several arguments into smaller functions, each taking some (usually one) of the original arguments; these functions are then called in sequence. Noble introduces this pattern for scenarios where the caller should not need to supply all the arguments (e.g. constants): the curried object can supply them on the caller’s behalf. In GoF terms, it acts as a proxy.


    void SettleDebt(Loan, ParcelsToPay) { } # no. of parcels can be retrieved from the loan 


    int GetOpenParcels(Loan) { }
    void SettleDebt(Loan) { () => Settle(Loan) } # in Settle no. of parcels is retrieved

“These kinds of arguments increase the complexity of a protocol. The protocol will be difficult to learn, as programmers must work out which arguments must be changed, and which must remain constant.”

  • Results object: For complex scenarios, one might need several method calls to generate the expected result. Your results object should be a single object with everything on it. This pattern can be seen as the flip side of the Curried object pattern. One aspect worth mentioning is when those several calls span several systems; in this case, a results object helps reduce the coupling and acts as an abstraction around them.

“Perhaps the computation returns more than one object”

  • Future object: This pattern is employed when we need to execute a time-consuming operation and perform other tasks while waiting for the operation to complete. In essence, we aim to process a result asynchronously and then likely invoke a callback upon completion.

“Sometimes you need to ask a question, then do something else while waiting for the answer to arrive”

  • Lazy object: Sometimes you need to supply a result even though it is not certain the result will ever be used. The advantage of deferring the computation is that we don’t fetch or compute data unnecessarily.

“Some computations can best be performed immediately but the computation’s result may never be needed”

All in all, these concepts are quite widespread. Any mainstream programming language has support for async methods (Future object) and lazy evaluation (Lazy object). Moreover, good OO design makes the Arguments object and Selector object patterns appear quite naturally. Even so, it was a good back-to-basics article, reminding me of the importance of good design in my API methods.


CSS made easy

Damien Riehl is a technology lawyer and musician. He doesn’t agree with copyright-infringement lawsuits; for him, music is mathematics. And if you remember that there are only 8 musical notes, you see why Damien had the brilliant idea of writing an algorithm that generated every combination and placed the results in the public domain. According to him, this can help in cases where someone is sued just for using a combination another person had already used, without even knowing it. See the project’s FAQ to understand better.

I’m not a lawyer, nor am I defending piracy; what caught my attention was the observation that there is a finite space of possibilities. I think there is a fascination in being able to say ‘here’s everything about topic X’; personally, that’s what attracted me to my master’s degree. I won’t be arrogant and say that I know everything about Design Thinking, but I approached my studies with this objective.

Another topic: think about the design of an HTML page or SPA; how many different ways are there to do it? We use CSS to control what is displayed. The colors are finite, ranging from #000000 to #FFFFFF; borders are top-bottom-left-right; and so on. Adam Wathan noticed this and developed Tailwind CSS. With Tailwind, you write your CSS without leaving your HTML. See the difference: before, if I wanted my text to be blue and bold, I would do this:


.info {
  color: blue;
  font-weight: bold;
}


<p class="info">
  Lorem Ipsum
</p>

And voilà, bold blue text. Tailwind CSS lets you write the styles directly in the markup, and it has a library with hundreds and hundreds of utility classes. As soon as you use one in your HTML, the corresponding CSS is generated automatically. One detail: an application runs in your terminal and scans your pages to know what to generate:


<p class="text-blue-500 font-bold">
  Lorem Ipsum
</p>

or very large, bold, underlined, sky blue text centered on the page:

<p class="text-3xl font-bold underline text-sky-400 text-center">
  Lorem Ipsum
</p>

UnoCSS takes the idea further and eliminates the CSS file completely. In UnoCSS, a script analyses your pages at runtime and generates the classes for you, with no need for an application running in the background. Magic! Note that the way of writing is exactly the same as Tailwind CSS.


<p class="text-blue-500 font-bold">
  Lorem Ipsum
</p>

I’ve been using UnoCSS a lot. But I have my criticisms: the large number of classes can make the HTML verbose and, let’s face it, difficult to understand. And the CSS generated by Tailwind CSS is only readable by… Tailwind developers 😀.

Additionally, the framework’s reliance on utility classes can lead to a lack of consistency in design across the site, as different people may use different classes to achieve similar effects. And since there is no CSS file maintained by the team, the need to document style choices is essential.

Still, the practicality seems to me to show that frameworks like these will be increasingly popular.

Technology Radar

A new edition of ThoughtWorks’ Technology Radar was published this week.

To no one’s surprise, AI is the big theme of this edition; I was, however, drawn to 2 items:


Mermaid is in the Adopt ring.

In my current project at ING, we have used documentation-as-code from the beginning, though our choice was PlantUML. I can comment on my experience in three aspects:

  • Consistency: By treating documentation as code, it becomes easier to maintain consistency between the code and its documentation, especially when it comes to software architecture. We use C4 and when we make structural changes, the diagrams and the new code are versioned in the same PR, showing exactly the evolution of the system;
  • Collaboration: Among engineers it is great, because it feels natural to edit documentation in the same flow as the software. Outside the world of engineers there is a barrier, as it is necessary to know the markup, and a point-and-click interface is more intuitive. This is evident in discussions of the software context, where interaction with business colleagues is necessary;
  • Automation: This is where documentation-as-code shines. If the team culture encourages comments on your codebase, several diagrams can be automatically generated.


It’s difficult for me to agree with the argument that we should embrace complexity in software development. Complexity is something to be combated in our design, in our implementation, in our processes. Using Cynefin as a guide, our goal is to transition from the complex to the complicated. And note: the article uses AI as an example of complexity, but AI operates in the complicated domain, using patterns and knowledge to deliver decisions and answers.

I remember Dumbledore saying to Harry: “Soon we must all face the choice, between what is right and what is easy”. An architect’s job, and all engineers are architects to some extent, is to resist the temptation of the easiest solution; it is common in these situations to introduce accidental complexity. If we are talking about essential complexity, fine, but we should still fight to reduce it. I don’t blame ThoughtWorks for the approach, though; Dijkstra once said that “complexity sells better”.

Read the Radar, it’s always interesting; especially check out what’s in the Adopt ring.

API Days

This week I was in London attending the API Days event. Once a year I attend an external event; besides studying alone, it is important to meet professional colleagues and talk about what happens in the trenches. To my surprise, AI was not the central topic: I watched around 30 talks and only 4 were specifically about AI. Two major themes at the event:

API Governance: Emphasis was given to the API lifecycle: definition → design → development → testing → publication → operation. What I’ll take home:

  • Documentation is the central point of your API, whether it is consumed by developers or read by robots.
  • The role of patterns in design. This is a growing market, and your consumers expect you to follow OpenAPI, AsyncAPI, Semantic Versioning, HTTP response codes and the Protocol Buffers definition language.
  • Your operation must provide freedom of choice: do not assume a cloud provider, do not force a gateway, be open.

Democratization of APIs: Here two views are converging. On one hand, experts say we should develop APIs with devops and gitops in mind. This vision places great importance on the governance aspects mentioned above, and highlights interoperability and composability as essential attributes of modern APIs. On the other hand, if we look at the typical composition of companies, only 10% of people are in the technology area (Gartner research shown in one of the talks); 49% are end users and 41% are classified as business technologists. These 41% create technology or analysis solutions on top of what the IT areas provide. The term ‘post-API economy world’ was coined to embrace these people, who should have easy access to APIs: the simpler the access, the easier it will be for innovative products to emerge from the available information. This second vision focuses on open and public APIs, ecosystems and marketplaces.

I will mention 2 tools that I tested at the event and found fantastic. First, Superface. There are API catalogs, and we know the concept of observability, but how do we deal with APIs that are similar yet have different formats? Think of two weather services, OpenWeatherMap being one of them; both let you check whether it’s going to rain in London today. But the similarity ends there: it is necessary to write code for each of them. Furthermore, if one of them is unavailable, you are responsible for rerouting the request. Superface deals with this: (1) you state what your contract is; (2) Superface’s autonomous agents discover APIs consistent with your contract and map your contract onto the contracts of the APIs they found. (OK, there’s AI here 🙂)

Postman mock servers: I often use Postman to test APIs, and I discovered that you can prototype your API in it. Design your methods, specify contracts from examples, and Postman creates documentation and an endpoint. When I wanted to sketch an API, I used to use httpbin; this is much cooler.

Home Office

In 2008 I worked at Itaú, in the IT area serving investment funds. I was part of a team responsible for fund performance reports, used by traders to guide their strategies. Our support was 24×7, especially during the night, when those reports were generated. We discussed whether the engineer on stand-by could work from home; our request was denied on the grounds that it was more productive to work within the bank’s infrastructure, where everything was available and where all the stand-by engineers were. Then 2020 came, Covid came, and we know the result. My experience, and that of my colleagues: we work better from our homes.

Zoom has grown exponentially precisely in this niche, and last week we read that its CEO prefers work at the office. Eric Yuan points to speed of innovation and interaction as motivators; between the lines I read ‘our productivity needs to increase’. Yes, productivity is the culprit again. Also between the lines, I read that at Zoom top management doesn’t trust people unless they’re around, where they can see what people are doing. Microsoft showed this in the 2022 Pulse Report:
“The majority of employees (87%) report that they are productive at work (…)” but “85% of global business decision makers say that the shift to hybrid work has made it challenging to have confidence that people are being productive”.

I work in IT; this is the world I know and about which I can give an opinion. Here, engineers are productive working from home, and there are doubts at C-level. McKinsey gave voice to this feeling 2 weeks ago with its piece: yes, you can measure software developer productivity. Kent Beck wrote about this in his newsletter. Given the crisis of confidence I mentioned, metrics are proposed to measure what I am doing. Kent is precise in his analysis, and I agree: they confuse “activity” with “productivity”. Productivity is about delivering value and transforming your company.

The dichotomy of remote work vs. in-person work should not be the central discussion, trust and productivity are the central elements. And besides, do you want to go back to the office every day?

Architecture and AI

When I wrote my Master’s thesis, conceptualizing creativity was essential, and I used J.P. Guilford’s ideas as a basis: creativity is the ability to exhibit creative behavior to a remarkable degree. He conceptualized creativity as a factor within a general theory of intelligence, involving divergent thinking that could be developed through interaction between individuals and their environments. His proposal uses divergent cycles, which make it possible to create alternatives to a design problem from different perspectives, and convergent cycles, which emphasize the best option for the problem, with no room for ambiguity.

I’ve been revisiting this topic, discussing on Reddit whether ChatGPT can be considered creative, and more and more I tend to say “yes”. Last month an architect suggested to me the book “Architecture in the Age of Artificial Intelligence: An Introduction to AI for Architects”; after all, an architect must have original ideas, so could they be replaced by ChatGPT? It’s a short book (180pp) and I recommend reading it. The first 2 chapters are a summary of techniques and how we got here in the field of Artificial Intelligence (but don’t expect anything in depth). The remaining chapters offer the view that knowledge workers like architects will be helped by AI tools but can also be overwhelmed by them, and that it is necessary to anticipate where this happens and prepare. The reports about XKool are fascinating, and this Guardian article has great pictures. In my reading, the book considers AI capable of being creative, something that Nick Cave’s latest newsletter dismantles. According to Nick, “ChatGPT is fast-tracking the commodification of the human spirit by mechanising the imagination”, which sounds anachronistic to me.

I, as a software developer, also see myself as a knowledge worker, and I am studying a lot to be ready. I have also been using my children and their friends to (try to) understand how the next generation sees the world a few years from now. Both see a world dominated by AI and are fearful of their place in it. I tell them what I wrote above: be prepared.

Cypress, new kid on the block

Selenium has been my go-to tool for UI test automation for a while, and for good reason: it was the front-runner, proving its worth time and time again. I’ve used it extensively, but lately I’ve leaned towards Cypress. I took a course on Cypress (all of the exercises can be found in my GH), and I’ve become thoroughly convinced that it presents a more efficient and reliable alternative.

When it comes to analysing the differences between Selenium and Cypress, a few key factors stand out. Selenium supports multiple languages, including Java, Python, C# and others, while Cypress is purely JavaScript. This could initially seem limiting, but considering most web applications are now JavaScript-based, Cypress becomes a natural choice. Selenium tests run outside the browser, while Cypress runs directly inside it. This lets Cypress take control of the entire automation process, including network traffic, timers, and even the loading of JavaScript code. It’s this inner control that promises stability, a promise Selenium often can’t keep due to its reliance on numerous third-party factors. The possibility of debugging your automation with the browser’s developer tools is a game-changer; I spend a lot of time understanding my scripts using the console, styles and network tabs, and I’m sure FE developers are all very familiar with them.

Cypress provides a simple and comprehensive API to interact directly with the DOM, allowing you to write simple, effective tests, take a look:

describe('DOM Interactions Test', function() {
  it('Fills and submits a form', function() {
    cy.visit('') // Cypress loads the webpage

    cy.get('input[name="firstName"]').type('John') // Cypress finds the firstName input and types 'John' into it
    cy.get('input[name="lastName"]').type('Doe') // Cypress finds the lastName input and types 'Doe' into it

    cy.get('button[type="submit"]').click() // Cypress clicks the submit button
  })
})

Compare this with Selenium

WebDriver driver = new ChromeDriver();

driver.get(""); // Open URL

WebElement firstName = driver.findElement("firstName"));
firstName.sendKeys("John"); // Fill the firstName input

WebElement lastName = driver.findElement("lastName"));
lastName.sendKeys("Doe"); // Fill the lastName input

WebElement submitButton = driver.findElement(By.tagName("button"));
submitButton.submit(); // Submit the form

Cypress lets you interact with CSS selectors!

Another key advantage of Cypress is its ability to handle asynchronous operations very easily.

describe('Asynchronous Handling Test', function() {
  it('Waits for an element to be visible', function() {
    cy.visit('') // Cypress loads the webpage

    cy.get('#asyncButton', { timeout: 10000 }) // Waits up to 10 seconds for the element with id 'asyncButton'
      .should('be.visible') // Asserts that the element is visible
      .click() // Clicks the button once it's visible
  })
})

Dealing with APIs is also a breeze:

describe("Todo List API Interaction", () => {
  // Define the base URL of your API
  const apiUrl = "";

  beforeEach(() => {
    // Intercept the API call and load a JSON payload for the response
    cy.intercept("GET", `${apiUrl}/todos`, { fixture: "todos.json" }).as("getTodos");
    // Visit the application or a specific page where the API call is made
    cy.visit("/");
  });

  it("should fetch todos from the API and display them", () => {
    // Wait for the intercepted API call to complete
    cy.wait("@getTodos");

    // Assert that the response was handled properly
    cy.get(".todo-item").should("have.length", 3); // Assuming the fixture contains 3 todo items
  });
});

What to me is the big advantage also brings my caveat: interacting with CSS selectors demands that you be careful to choose those that are least likely to change and break your tests. Still, I plan to use only Cypress for my UI tests.

Bloom Filters

One of my favorite data structures for efficient membership tests is the Bloom filter, and I just came across this cool demo; check it out.

Named after its creator, Burton Howard Bloom, a Bloom filter is a space-efficient probabilistic data structure designed to answer a simple question: “Is this item in the set?”. Unlike other data structures, a Bloom filter trades off accuracy for speed and space efficiency, meaning it might sometimes return false positives but never false negatives.

The Bloom filter works by using a bit vector of size ‘m’ and ‘k’ hash functions. When an element is added to the filter, it is passed through all ‘k’ hash functions, which return ‘k’ indexes to set to ‘1’ in the bit vector. To check whether an element is in the filter, the same ‘k’ hash functions are applied. If all ‘k’ indexes in the bit vector are ‘1’, the filter returns ‘yes’; otherwise, it returns ‘no’. A ‘yes’ answer means the element might be in the set (it could be a false positive), but a ‘no’ answer always means the element is definitely not in the set (a true negative). Note that when implementing one, you should think carefully about how many hash functions to use and what false-positive rate is acceptable.

Bloom filters are widely used in software applications where the cost of false positives is less critical than the benefits of speed and space efficiency. Some of these applications include spell checkers, network routers, databases, and caches. For instance, Google’s Bigtable uses Bloom filters to reduce the disk lookups for non-existent rows or columns, significantly improving its performance. Medium uses Bloom filters to avoid recommending articles a user has already read. I have a password generator application and use a Bloom Filter to check if a password has already been used.